SECURE CLOUD STORAGE MODEL TO PRESERVE CONFIDENTIALITY
AND INTEGRITY
SARFRAZ NAWAZ BROHI
A thesis submitted in fulfilment of the
requirements for the award of the degree of
Doctor of Software Engineering
Advanced Informatics School
Universiti Teknologi Malaysia
JANUARY 2015
To
my supportive parents,
and
beloved siblings
ACKNOWLEDGEMENT
First of all, I thank ALLAH (SWT), the God Almighty, for granting me the
health, knowledge, strength, ability, and patience to accomplish this research, and for
blessing me with sympathetic and supportive supervisors as well as family members.
I would like to express my tremendous gratitude to my supervisor Dr Suriayati
Chuprat for her compassionate character, knowledge sharing, ideas and continuous
support from the first until the last day of this study. Her sincere behaviour and
constructive feedback enabled me to achieve significant research milestones within
the required time-frame.
I would also like to thank my external supervisor Dr Jamalul-lail Ab Manan
for enriching me with innovative ideas and skills by sharing his expertise and
knowledge in the field of cloud computing security. Due to his unlimited support for
reviewing, improving and evaluating my research, I was able to publish several high
quality research papers.
At various stages during this study, I faced several undesirable challenges
that burdened me with mental and physical stress. However, these never stopped
me from progressing, thanks to the encouraging moral and financial support of my
father Dr Muhammad Nawaz Brohi. I am extremely thankful to him for his
understanding, kindness, belief, and trust in me.
I also wish to express my deepest appreciation to my mother for her prayers
for my success throughout this study. I will always remember my late
grandmother in my prayers. This research would never have been possible without
her wishes for my success.
ABSTRACT
Cloud Service Providers (CSPs) offer remotely located cloud storage services
that give business organizations cost-effectiveness advantages. From an
industrial perspective, Amazon Simple Storage Service (S3) and Google Cloud
Storage (GCS) are the leading cloud storage services. These storages are secured
using the latest data security approaches such as cryptographic algorithms, data
auditing processes, and strict access control policies. However, organizations
for which confidentiality of information is a critical requirement are reluctant
to adopt these services due to emerging data confidentiality and integrity
concerns. Malicious attackers have breached cloud storages to steal, view,
manipulate, and tamper with clients’ data. Researchers have attempted to
overcome these shortcomings by designing and developing various security models.
These solutions have limitations and require enhancements and improvements
before they can be widely accepted by CSPs to guarantee secure cloud storage
services. In order to solve the stated problem, this research developed an
improved security solution named the Secure Cloud Storage Model (SCSM), which
consists of a multi-factor authentication and authorization process using
Role-Based Access Control (RBAC) with a Complex Random Security Code Generator
(CRSCG); partial homomorphic cryptography using the Rivest, Shamir and Adleman
(RSA) algorithm; Trusted Third Party (TTP) services including a Key Management
(KM) approach and a data auditing process; implementation of 256-bit Secure
Socket Layer (SSL); and a Service Level Agreement (SLA). SCSM was implemented
using Java Enterprise Edition with the GlassFish server and deployed on a cloud
computing infrastructure. The model was evaluated using the Extended Euclidean
algorithm, system security analysis, key management recommendations, a web-based
testing tool, a security scanner, and a survey. The survey results showed that
83.33% of the respondents agreed that SCSM could be widely accepted by CSPs to
offer secure cloud storage services. The aggregate evaluation results showed
that SCSM is successful in preserving data confidentiality and integrity at
remotely located cloud storages.
ABSTRAK
Penyedia perkhidmatan awan (CSP) menawarkan servis storan awan secara
jauh yang memberi kelebihan kos yang efektif. Mengikut perspektif industri,
Amazon Simple Storage Service (S3) dan Google Cloud Storage (GCS) merupakan
peneraju utama servis storan awan. Storan ini adalah selamat kerana mereka
menggunakan pendekatan keselamatan data yang terkini seperti algoritma
kriptografi, proses pengauditan data serta polisi kawalan capaian yang ketat. Walau
bagaimanapun, bagi organisasi yang mengutamakan kerahsiaan maklumat, mereka
tidak tertarik untuk menggunakan servis tersebut kerana bimbang akan kerahsiaan
dan integriti data. Penyerang yang berniat jahat telah mencabuli storan awan dengan
mencuri, melihat, memanipulasi dan mengganggu data pelanggan. Para penyelidik
telah mencuba menangani masalah-masalah ini dengan mereka bentuk dan
membangunkan pelbagai model keselamatan. Penyelesaian yang telah dibangunkan
ini masih mempunyai had tertentu dan memerlukan penambahbaikan sebelum ianya
diterima secara meluas oleh CSP demi menjamin keselamatan servis tersebut. Untuk
menyelesaikan masalah yang dinyatakan, penyelidikan ini telah membangunkan
penyelesaian keselamatan yang telah ditambahbaik dan ianya dinamakan Secure
Cloud Storage Model (SCSM). Model ini terdiri daripada pengesahan pelbagai-
faktor, proses kebenaran menggunakan Role-Based Access Control (RBAC) dengan
Complex Random Security Code Generator (CRSCG), kriptografi homomorphic
separa menggunakan algoritma Rivest, Shamir and Adleman (RSA), servis-servis
Trusted Third Party (TTP) iaitu pendekatan pengurusan kunci (KM) dan proses
pengauditan data, perlaksanaan Secure Socket Layer (SSL) 256-bit, dan Service
Level Agreement (SLA). SCSM dibangunkan menggunakan Java Enterprise Edition
dengan pelayan Glassfish dan dilaksanakan pada infrastruktur pengkomputeran
awan. Model ini kemudiannya dinilai menggunakan algoritma Extended Euclidean,
analisis keselamatan sistem, cadangan-cadangan pengurusan kunci, alatan ujian
berasaskan sesawang, pengimbas keselamatan serta kajian. Hasil kajian
menunjukkan 83.33% responden bersetuju SCSM boleh diterima secara meluas oleh
CSP yang menawarkan servis storan awan yang selamat. Keputusan penilaian
membuktikan SCSM berjaya dalam memelihara kerahsiaan data dan integriti pada
storan awan jarak jauh.
TABLE OF CONTENTS
CHAPTER TITLE PAGE
DECLARATION ii
ACKNOWLEDGEMENT iv
ABSTRACT v
ABSTRAK vi
TABLE OF CONTENTS vii
LIST OF TABLES xiii
LIST OF FIGURES xiv
LIST OF ABBREVIATIONS xvii
LIST OF SYMBOLS xx
LIST OF APPENDICES xxi
1 INTRODUCTION 1
1.1 Overview 1
1.2 Problem Background 2
1.3 Problem Statement 6
1.4 Research Objectives 7
1.5 Scope of Research 9
1.6 Significance of Research 10
1.7 Contribution of Research 10
1.8 Thesis Organization 11
1.9 Summary 13
2 LITERATURE REVIEW 14
2.1 Introduction 14
2.2 Cloud Deployment Models 16
2.2.1 Public Cloud 16
2.2.2 Private Cloud 17
2.2.3 Hybrid Cloud 18
2.2.4 Community Cloud 19
2.3 Cloud Service Delivery Models 20
2.3.1 Software as a Service 20
2.3.2 Platform as a Service 21
2.3.3 Infrastructure as a Service 21
2.4 Cloud Storage Services 22
2.5 Cloud Storage Data Security Concerns 23
2.5.1 Data Confidentiality 23
2.5.2 Data Integrity 24
2.6 Data Protection Mechanisms for Cloud Storages 25
2.6.1 Cryptography and Key Management 25
2.6.2 Trusted Computing 26
2.6.3 Access Control Mechanisms 27
2.6.4 Service Level Agreement 27
2.6.5 Data Auditing Services 28
2.7 Industry Based Implementations of Cloud Storage
Services 29
2.7.1 Amazon Simple Storage Service 29
2.7.2 Google Cloud Storage 33
2.8 Limitations of Industry Implemented Cloud Storage
Services 37
2.8.1 Vulnerable Key Management Approach 39
2.8.2 Inadequate Cryptographic Support 40
2.8.3 Exclusion of Security Assurance in Service
Level Agreements 40
2.8.4 Untrustworthy Data Integrity Verification
Services 41
2.9 Confidentiality and Integrity Preserving
Cloud Storage Models 42
2.9.1 Secure Cloud Storage Integrator for
Enterprises 43
2.9.2 Data Confidentiality and Integrity
Verification Using User Authenticator
Scheme in Cloud 45
2.9.3 Secure Storage Services in Cloud 47
2.9.4 Data Confidentiality in Storage-Intensive
Cloud Applications 49
2.9.5 Cloud Storage Integrity Checking Using
Encryption Algorithm 51
2.10 Critical Analysis on Related Work Solutions 52
2.11 Contribution and Road Map of Research 56
2.12 Summary 59
3 RESEARCH METHODOLOGY 60
3.1 Introduction 60
3.2 Research Methodology 62
3.2.1 Literature Review 62
3.2.2 Analysis 64
3.2.3 Design 65
3.2.4 Implementation 66
3.2.5 Evaluation 67
3.3 Research Activities and Outcomes 68
3.4 Summary 71
4 SECURE CLOUD STORAGE MODEL 72
4.1 Introduction 72
4.2 Building Blocks of SCSM 73
4.3 Description and Architecture of SCSM 74
4.3.1 Roles and Responsibilities 76
4.4 Components of SCSM 77
4.4.1 Multi-factor Authentication and Authorization
Process 78
4.4.1.1 Role Based Access Control 79
4.4.1.2 Complex Random Security
Code Generator 81
4.4.2 Partial Homomorphic Cryptography 82
4.4.3 256-bit Secure Socket Layer 86
4.4.4 Service Level Agreement 87
4.4.5 Trusted Third Party Services 96
4.4.5.1 Key Management Approach 96
4.4.5.2 Data Auditing Process 98
4.5 Process of SCSM 101
4.6 Summary 103
5 IMPLEMENTATION OF THE SECURE CLOUD
STORAGE MODEL 104
5.1 Introduction 104
5.2 Software Development Process of SCSM 106
5.3 Systematic Workflow of SCSM 112
5.3.1 Data Transfer and Retrieval 113
5.3.2 Encrypted Data Processing 115
5.3.3 Verification Metadata Generation
and Secure Transfer of Parameters 117
5.3.4 Data Integrity Verification 118
5.3.5 Data Recovery 122
5.3.6 Private Key Retrieval and Data
Downloading 123
5.4 Deployment of SCSM 125
5.5 Summary 127
6 EVALUATION AND RESULTS 128
6.1 Introduction 128
6.2 Evaluation Strategy of Research 129
6.3 Evaluation and Results of SCSM Components 130
6.3.1 Qualys Web-based Evaluation Methodology 131
6.3.1.1 SSL Certificate Inspection 131
6.3.1.2 Server Configuration Inspection 133
6.3.1.3 Final Score and Grade Assignment 138
6.3.2 Mathematical Evaluation 140
6.3.3 Compliance Evaluation 144
6.3.4 Security Analysis 146
6.3.5 Survey Based Evaluation 147
6.3.5.1 Structure of Survey 148
6.3.5.2 Survey Analysis for Multi-factor
Authentication and Authorization
Process 150
6.3.5.3 Survey Analysis for Service
Level Agreement 152
6.4 Evaluation of SCSM using Survey and Skipfish 156
6.5 Benchmarking of SCSM with Industry and Academia
Best Practices 161
6.5.1 Secure and Flexible Partial Homomorphic
Cryptography 165
6.5.2 Security and Privacy Guaranteeing Service
Level Agreement 167
6.5.3 Trusted, Secure and Efficient Data Auditing
Service 168
6.5.4 Trusted and Secure Key Management
Approach 170
6.5.5 Extremely Secure Multi-factor Authentication
and Authorization Process 171
6.6 Summary 173
7 CONCLUSION AND FUTURE WORK 174
7.1 Introduction 174
7.2 Contributions and Significance 175
7.3 Potential Applications of SCSM 178
7.4 Limitations and Future Directions of Research 179
7.4.1 Fully Homomorphic Encryption 179
7.4.2 Heterogeneous Data 180
7.4.3 Performance 180
7.4.4 Multi-user Computing Environment 181
7.5 Summary 181
REFERENCES 182
Appendices A - C 197 - 201
LIST OF TABLES
TABLE NO. TITLE PAGE
1.1 Analysis of Research Problem Area 4
3.1 Research Activities and Outcomes 69
4.1 Service Level Agreement 89
6.1 Protocol Support Rating Guide 134
6.2 Key Exchange Rating Guide 135
6.3 Cipher Strength Rating Guide 137
6.4 Evaluation Criteria 138
6.5 Letter Grading Translation 139
6.6 Keys of Alice and Bob 141
6.7 Key Management Compliance and Auditing 144
6.8 Participation of the Industry Experts in Survey 149
6.9 Analysis of Multi-factor Authentication and
Authorization Process 151
6.10 Analysis of Service Level Agreement 155
6.11 Analysis of SCSM 157
6.12 SCSM Benchmarking with Industry and Academia
Implemented Solutions 163
LIST OF FIGURES
FIGURE NO.
TITLE
PAGE
1.1 Survey for Research Problem Area 4
1.2 Thesis Organization 12
2.1 Server Side Encryption 30
2.2 Encryption with Client’s Key 31
2.3 Client Side Encryption 32
2.4 Data Migration Process 34
2.5 Authentication Process 36
2.6 Limitations of Amazon S3 and GCS 38
2.7 Cloud Storage Integrator 44
2.8 Preserving Data Confidentiality 45
2.9 Data Integrity Verification 46
2.10 Data Updating 46
2.11 TrustStore Hybrid Cloud Service 48
2.12 Key Management and Data Confidentiality 50
2.13 Cloud Storage Security using Broker 51
2.14 Academia Implemented Cloud Storage Models 53
2.15 Research Road Map 58
3.1 Research Methodology 61
4.1 Architecture of SCSM 74
4.2 Components of SCSM 78
4.3 RBAC Privileges 80
4.4 Access Logs Report 100
4.5 Process of SCSM 102
5.1 HTTP based Authentication 106
5.2 Role Mapping 107
5.3 Roles and Security Annotations 108
5.4 RSA Partial Homomorphic Cryptography 109
5.5 Metadata Generation 110
5.6 Metadata Verification 110
5.7 Sound Steganography 111
5.8 Operations of SCSM 113
5.9 Encryption Process 114
5.10 Decryption Process 115
5.11 Data Processing 116
5.12 VMD Generation and Transfer Process 117
5.13 VMD Decoding Process 119
5.14 Data Auditing Process 119
5.15 Auditing Report 120
5.16 Data Integrity Violation 121
5.17 Auditing Report After Violation 121
5.18 Data Recovery Process 122
5.19 Auditing Report after Data Recovery Process 123
5.20 Private Key Decoding Process 124
5.21 Data Retrieval Process 124
5.22 Module based Deployment Using Glassfish Server 126
6.1 Evaluation Strategy 130
6.2 Implemented SSL Certificate Details 132
6.3 SSL Certificate Inspection 133
6.4 Protocol Support 135
6.5 Key Exchange 136
6.6 Cipher Strength 138
6.7 SSL Evaluation Results 140
6.8 Results for Multi-factor Authentication and
Authorization Process 151
6.9 Results for SLA 154
6.10 Results for SCSM 158
6.11 Skipfish Security Scanning Report 159
6.12 Skipfish Interactive Report 160
6.13 Performance Analysis of Encryption Process 166
6.14 Performance Analysis of Decryption Process 166
6.15 Performance Analysis of Data Integrity Verification
Process 169
6.16 Security Experiment on CRSCG 172
7.1 Contributions, Publications and Certificates 177
LIST OF ABBREVIATIONS
ACL - Access Control List
ACM - Access Control Mechanism
ACP - Access Control Policy
AES - Advanced Encryption Standard
API - Application Programming Interface
AWS - Amazon Web Services
CA - Client’s Admin
CAT - Computer Associates Technologies
CentOS - Community Enterprise Operating System
CRC - Cyclic Redundancy Check
CRSCG - Complex Random Security Code Generator
CSA - Cloud Security Alliance
CSP - Cloud Service Provider
CSPA - Cloud Service Provider’s Admin
CSSP - Cloud Storage Service Provider
DAC - Discretionary Access Control
DBAN - Darik’s Boot and Nuke
DSA - Digital Signature Algorithm
ECC - Elliptic Curve Cryptography
EJBs - Enterprise Java Beans
FHE - Fully Homomorphic Encryption
GCS - Google Cloud Storage
GFIS - German Federal Office for Information Security
HIPAA - Health Insurance Portability and Accountability Act
HMAC - Keyed-Hash Message Authentication Code
HTML - Hypertext Markup Language
HTTPS - Hypertext Transfer Protocol Secure
IaaS - Infrastructure as a Service
IM - Integrity Management
JSF - Java Server Faces
JSP - Java Server Pages
KM - Key Management
MAC - Mandatory Access Control
MITM - Man-in-the-Middle
NAS - Network Attached Storage
NIST - National Institute of Standards and Technology
NSA - National Security Agency
OS - Operating System
PaaS - Platform as a Service
PCI - Payment Card Industry
PCIDSS - Payment Card Industry Data Security Standard
RBAC - Role-based Access Control
RSA - Rivest, Shamir and Adleman
S3 - Simple Storage Service
SaaS - Software as a Service
SCSM - Secure Cloud Storage Model
SDK - Software Development Kit
SDLC - Software Development Life Cycle
SE - Software Engineering
SHA - Secure Hash Algorithm
SLA - Service Level Agreement
SMBs - Small and Medium Businesses
SMS - Short Message Service
SQL - Structured Query Language
SSE - Server Side Encryption
SSE-C - Server Side Encryption with Customer-Provided Key
SSL - Secure Socket Layer
SSO - Single Sign-On
TCG - Trusted Computing Group
TDEA - Triple Data Encryption Algorithm
TED - Trusted Extension Device
TLS - Transport Layer Security
TPM - Trusted Platform Module
TTP - Trusted Third Party
TTPA - Trusted Third Party’s Admin
TVD - Trusted Virtual Domain
UML - Unified Modelling Language
VF - Virtual Firewall
VM - Virtual Machine
VMD - Verification Metadata
VPC - Virtual Private Cloud
VPS - Virtual Private Server
vTPM - Virtual Trusted Platform Module
XHTML - Extensible Hypertext Markup Language
XML - Extensible Markup Language
XSS - Cross-site Scripting
LIST OF SYMBOLS
| - Such That
d - Private Key Exponent
e - Public Key Exponent
n - Modulus for Private and Public Key
φ(n) - Euler’s Phi (Totient) Function
R - Random Factor
LIST OF APPENDICES
APPENDIX
TITLE
PAGE
A Papers published during the author’s
candidature
197
B Certificates obtained during the author’s
candidature
200
C Survey design and delivery 201
CHAPTER 1
INTRODUCTION
1.1 Overview
Cloud computing is an innovative method of delivering computing resources
(Tripathi and Mishra, 2011). It enables clients to execute their enterprise
applications and store data on third-party-owned servers. The cloud offers various
service delivery models such as Software as a Service (SaaS), Platform as a Service
(PaaS) and Infrastructure as a Service (IaaS), which are acquired by clients
according to their requirements (Bouayad et al., 2012). IaaS is further categorized
into three major facilities: compute, network, and storage.
This research mainly focuses on the storage sub-offering of IaaS, which is
provided to clients by well-known Cloud Service Providers (CSPs) such as Amazon
and Google (Ghosh and Ghosh, 2012). This service enables organizations to
obtain dynamic, redundant, scalable, and remotely located data storage services that
can be easily scaled up or down to avoid the costly burden of an under- or
over-utilized storage capacity (Jiang et al., 2013). Cloud storage services have
been very useful for Small and Medium Businesses (SMBs) that lack the capital
budget to implement and maintain a personalized storage infrastructure (Sun and
Sha-sha, 2011; Deyan and Hong, 2012).
However, cloud storage is nowadays becoming a business interest for
organizations of all sizes that require resilient data availability, business
continuity, and disaster recovery solutions. For cloud storage clients, critical
data are maintained and backed up by the CSP at multiple geographically
distributed locations (Zhang and Zhang, 2011).
The remainder of this chapter is organized in eight sections. Section 1.2
describes the problem background. Section 1.3 presents the problem statement.
The objectives, scope, significance, and contribution of the research are described
in Sections 1.4, 1.5, 1.6 and 1.7, respectively. Section 1.8 describes the
organization of the entire thesis. Section 1.9 summarizes this chapter.
1.2 Problem Background
Organizations that are required to follow well-defined data security
standards such as the Health Insurance Portability and Accountability Act (HIPAA)
and the Payment Card Industry Data Security Standard (PCIDSS) do not trust the
existing security techniques and policies offered by CSPs (Hofmann and
Woods, 2010; Bamiah et al., 2012; Shucheng et al., 2010). Due to the lack of
control over their confidential data while it is stored at cloud storages, clients
are concerned that malicious users might gain illegal access to their sensitive
records (Taeho et al., 2013).
This research focuses on solving the two major issues that deter
organizations dealing with confidential data from adopting cloud storage
services: data confidentiality and integrity breaches (Syam and
Subramanian, 2011; Gansen et al., 2010). The term data confidentiality refers to
the concept that only authorized parties or systems have the ability to access
protected information. The threat of data compromise increases in the cloud
environment due to the augmented number of parties, devices and applications
involved, which leads to an increase in the number of access points.
Data integrity means that data can only be modified by authorized parties.
The concept of data integrity refers to the protection of data from unauthorized
deletion, modification or fabrication (Zissis and Lekkas, 2012).
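In practice, such unauthorized modification is commonly detected by comparing cryptographic digests of the data computed before and after storage. The following minimal Python sketch is illustrative only; it shows the underlying idea, not the verification-metadata scheme developed in this thesis.

```python
import hashlib

# Illustrative integrity check: store a digest of the data before upload,
# then re-hash the retrieved copy and compare the two digests.
data = b"confidential client record"
digest_before = hashlib.sha256(data).hexdigest()  # recorded before upload

retrieved = b"confidential client record"          # copy fetched from storage
digest_after = hashlib.sha256(retrieved).hexdigest()

# Any unauthorized deletion, modification or fabrication of the stored bytes
# would change the digest and make this comparison fail.
assert digest_after == digest_before
```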
In order to further analyze the
research problem, this research also conducted a survey of industry- and
academia-based information security analysts, data auditors, cloud computing
researchers, developers, architects and security specialists. The detailed
structure of the survey is described in Chapter 6. The following question was
included in the survey to determine the validity and impact of the problem
background of this research.
Question: Organizations dealing with confidential data are reluctant to use
remotely located third party cloud storage services due to emerging data
confidentiality and integrity concerns.
The response scale was based on three options, i.e. Agree, Neutral and
Disagree. The survey response obtained for the research problem area, as shown in
Table 1.1 and Figure 1.1, justifies the necessity of formulating a solution for the
research problem, whereby 83.33% of respondents agreed that the organizations are
reluctant to adopt cloud storage services due to emerging data confidentiality and
integrity concerns.
Table 1.1: Analysis of Research Problem Area
Answer Choices Response Rate Academia Industry Total
Agree 83.33% 14 11 25
Neutral 16.67% 2 3 5
Disagree 0% 0 0 0
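The response rates in Table 1.1 follow directly from the respondent counts; as a quick arithmetic check (a throwaway sketch, with the counts taken from the table):

```python
# Verify the percentages in Table 1.1 from the raw respondent counts.
agree, neutral, disagree = 25, 5, 0
total = agree + neutral + disagree              # 30 respondents overall
agree_rate = round(100 * agree / total, 2)      # 25/30 -> 83.33%
neutral_rate = round(100 * neutral / total, 2)  # 5/30  -> 16.67%
print(agree_rate, neutral_rate)                 # prints: 83.33 16.67
```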
Figure 1.1: Survey for Research Problem Area
Past studies have shown that the confidentiality and integrity of data stored
at cloud storages are breached by external or internal attacks (Ling et al., 2011).
External attacks are launched by outside hackers who steal clients’ confidential
records. These attacks may be carried out by malicious IT personnel working for
competitors of the CSP or of the client. The intention of these attacks is to
damage the brand reputation of the CSP or to compromise the clients’ files. In
order to defend against these attacks, CSPs normally secure their physical and
virtual infrastructure using various tools and techniques for protecting clients’
data and systems. However, existing solutions are not adequate to achieve the
desired target (Rocha and Correia, 2011). It has also been identified that internal
employees of a CSP may become malicious as well (Catteddu and Hogben, 2009).
Internal attacks are launched by malicious insiders such as disgruntled
employees of a CSP. They intentionally abuse their privileged access to
compromise data confidentiality and integrity (Duncan et al., 2012). In contrast
to an external hacker, malicious insiders can attack the computing infrastructure
relatively easily and with less hacking knowledge, since they have a detailed
understanding of the underlying infrastructure. Without a completely trustworthy
solution for defending against insider attacks, malicious insiders can easily
obtain passwords, cryptographic keys and files, and gain access to clients’
records (Rocha et al., 2011). When clients’ data confidentiality has been
breached, they would mostly never learn of the unauthorized access, due to their
lack of control over their data and the lack of transparency in the CSP’s security
practices and policies.
The breach of data confidentiality and integrity creates a barrier of trust
between clients and CSPs. Clients need to ensure that the CSP will always provide
the agreed level of service and security to protect their confidential data. Trust
is impacted when CSPs do not meet the negotiated agreements, for example, by
implementing insufficient security techniques, storing data at locations not
permitted by law, or not complying with standards such as HIPAA or PCIDSS
(Khan and Malluhi, 2010). Trust issues are normally mitigated by signing a legal
Service Level Agreement (SLA) and granting adequate control to clients over their
confidential data (Xiaoyong and Junping, 2013). However, existing SLAs are
non-negotiable and fixed by CSPs for every client, whether an ordinary home user
or a banking institution. These SLAs are not able to accommodate the specific
requirements of organizations seeking to leverage cloud storage services for
storing confidential data (Asha, 2012).
1.3 Problem Statement
As discussed in the problem background, cloud storages are vulnerable to
external and internal attacks, which have impacted clients’ trust in CSPs for
shifting their confidential data to third-party cloud storages. Existing network
security solutions are not able to overcome the threats that violate cloud storage
data confidentiality and integrity (Nirmala et al., 2013). Considering these
issues, the problem statement of this research is stated as follows:

How to develop a secure cloud storage model that preserves data
confidentiality and integrity as well as ensures the delivery of trusted services to the
clients by considering their data security policies?
Several research questions can be extracted from the problem statement, as
follows:
i. What are the existing security models that have been designed, developed
or proposed by the industry and academia researchers to overcome data
confidentiality and integrity concerns for using cloud storage services?
ii. What are the limitations of existing industry and academia implemented
cloud storage models that raise confidentiality and integrity issues which
prevent organizations dealing with sensitive data from adopting cloud
storage services?
iii. How to design a model that preserves data confidentiality and integrity at
cloud storages as well as ensures the delivery of trusted services to the
clients?
iv. How to develop a model that enables the clients to store and process their
data at cloud storages with consistent data integrity, confidentiality and
trust?
v. How to verify that the implemented cloud storage model is successful in
preserving the confidentiality and integrity of sensitive data, and ensuring
the delivery of trusted services to the clients?
1.4 Research Objectives
The aim of this research is to develop a security model that overcomes the
data confidentiality and integrity concerns for using cloud storage services and
ensures the delivery of trusted services to the clients by considering their data
security policies. This aim will be achieved by completing the following
research objectives:
i. To investigate and obtain in-depth understanding of existing security
models that have been proposed by the industry and academia researchers
to overcome data confidentiality and integrity concerns for using cloud
storage services.
ii. To critically analyze as well as explain the limitations or gaps which have
been identified in the existing industry and academia implemented secure
cloud storage models.
iii. To design an improved and enhanced secure cloud storage model which
preserves data confidentiality and integrity, as well as ensures the delivery
of trusted services to the clients by considering their data security
policies.
iv. To implement and deploy a web-based prototype on a cloud computing
infrastructure which facilitates the clients to store and process their data at
cloud storages with consistent data confidentiality, integrity and trust
assurance.
v. To evaluate the developed cloud storage model in order to ensure that it
overcomes or mitigates the data confidentiality and integrity concerns,
and gains the trust of organizations dealing with sensitive data to adopt
cloud storage services.
1.5 Scope of Research
The cloud reference architecture consists of three service delivery models
(SaaS, PaaS, and IaaS) and four deployment models (Public, Private, Hybrid, and
Community) (Mell and Grance, 2011). Since cloud computing is a vast area of
research, this study focuses only on IaaS. Furthermore, IaaS providers offer
compute, network and storage services to clients. This research considers the
security of a cloud storage that resides at the data center of a CSP. Security has
several perspectives when it comes to research and development. This research
considers the confidentiality and integrity parameters of security as the major
problems to be solved. This research assumed that a breach of data confidentiality
and integrity will impact clients’ trust in using cloud storage services. In order
to achieve clients’ trust, data confidentiality and integrity must be protected,
and the CSP must always ensure the delivery of trusted cloud storage services to
the clients. Therefore, in this thesis, trust does not refer to the concept of
trusted computing.
However, this research assumed that users may be required to use trusted
platforms for using cloud storage services. For example, a Trusted Extension
Device (TED) and a Trusted Platform Module (TPM) can be used by clients to protect
their devices. In a cloud computing environment, system performance is also
considered a significant factor, but SCSM was designed and developed mainly by
considering the security requirements of organizations dealing with highly
confidential data. We believe that the identified research problem could not be
solved merely by providing encryption and data auditing approaches. Therefore,
our research scope focuses on providing a complete secure process comprising a
set of five components: a multi-factor authentication and authorization process
using Role-Based Access Control (RBAC) with a Complex Random Security Code
Generator (CRSCG), partial homomorphic cryptography, Trusted Third Party (TTP)
services including a Key Management (KM) approach and a data auditing process,
implementation of 256-bit Secure Socket Layer (SSL), and an SLA. This research
also focuses on the deployment of the research contribution, the Secure Cloud
Storage Model (SCSM), on a cloud computing infrastructure in order to obtain
authentic evaluation results.
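The partial homomorphic property mentioned above can be illustrated with textbook RSA, which is homomorphic with respect to multiplication: the product of two ciphertexts decrypts to the product of the two plaintexts. The following Python sketch uses tiny demonstration primes and is illustrative only; it does not reflect the key sizes or implementation details used in SCSM.

```python
# Textbook RSA is partially homomorphic: E(a) * E(b) mod n decrypts to a*b mod n.
p, q = 61, 53            # small primes, for demonstration only
n = p * q                # modulus for the public and private keys
phi = (p - 1) * (q - 1)  # Euler's phi function of n
e = 17                   # public key exponent, coprime with phi
d = pow(e, -1, phi)      # private key exponent (modular inverse, Python 3.8+),
                         # classically computed via the Extended Euclidean algorithm

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

a, b = 7, 12
product_cipher = (encrypt(a) * encrypt(b)) % n  # operate on ciphertexts only
assert decrypt(product_cipher) == (a * b) % n   # equals the product of plaintexts
```

This multiplicative property is what allows limited computation directly on encrypted data; unlike fully homomorphic encryption, it does not also support addition on ciphertexts.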
1.6 Significance of Research
When the objectives of the research are successfully accomplished, the
development of SCSM can be considered one of the valuable contributions in the
field of cloud computing security, since it will overcome the existing data
confidentiality and integrity concerns by providing trusted and secure cloud
storage services to the clients. The contribution of this research will be
beneficial for both client organizations and CSPs. Clients will adopt
cost-effective storage solutions in order to store their confidential data for high
availability, accessibility, and secure backup and recovery. In turn, CSPs will
adopt this solution to overcome the limitations of their existing cloud storage
services and to gain clients’ trust. This research expects that the adoption of
cloud storage services will rapidly increase with the successful implementation
and deployment of SCSM at the industry level.
1.7 Contribution of Research
The advent of cloud computing brought enormous challenges for software
engineers to design and develop secure cloud applications, platforms, and
infrastructures that deal with the storage of mission-critical data. In the domain
of Software Engineering (SE), information security engineers apply security
principles at each stage of the Software Development Life Cycle (SDLC), from
requirements analysis through the development and deployment phases. They are
also responsible for analyzing and testing the security of their developed
cloud-based solutions (Zingham and Saqib, 2013). This research adopted an SE
approach by analyzing the requirements of secure cloud storages and by designing,
developing and deploying a solution for them. Therefore, this research
contributed to the field of SE by completing those requirements which fall under
the responsibilities of information security engineers for developing secure
cloud storage services. The final software contribution of this research
introduces a novel SE approach to developing complex confidentiality- and
integrity-preserving cloud storage systems.
1.8 Thesis Organization
This thesis explores an emerging area of cloud security research focusing on data confidentiality and integrity concerns in using cloud storage services. The complete research is organized in seven chapters. Figure 1.2 shows the flow of the thesis organization. Chapter 1 presents the significance of this research, mainly by clarifying the research problem area, scope, contributions and objectives. An in-depth analysis of the existing literature is provided in Chapter 2, which covers the cloud security techniques and models proposed by various researchers to solve the existing cloud storage security problems. Chapter 2 also provides a critical analysis of the limitations and strengths of contributions implemented by industry and academia. Chapter 3 describes the entire research methodology used systematically to accomplish each research objective. The description and design of the SCSM are provided in Chapter 4. Each component of SCSM is discussed in technical as well as theoretical detail. SCSM is designed using architecture, use-case and sequence diagrams, in addition to the construction of an effective SLA. Chapter 5 describes the development details of the SCSM implementation as a web-based prototype. The entire system workflow is described using user interface snapshots. System deployment details on a real cloud computing infrastructure are also described in Chapter 5. The evaluation process and results for the entire system, as well as each component of SCSM, are described in Chapter 6. The applications of SCSM, the overall research conclusion, limitations and future directions are critically discussed and justified in Chapter 7.
Figure 1.2: Thesis Organization (flow diagram of Chapters 1 through 7)
1.9 Summary
The cloud storage service is a sub-category of IaaS which is provided to organizations for storing large amounts of data, with unlimited capacity, broad accessibility, resilient availability, disaster recovery, and cost-effectiveness features. However, organizations dealing with confidential data are reluctant to adopt remotely located cloud storage services due to emerging data confidentiality and integrity concerns, which have created a barrier of trust between CSPs and clients. In order to overcome this problem, this research aims to design and develop an improved, confidentiality- and integrity-preserving secure model for using cloud storage services by accomplishing the research objectives. The successful implementation and deployment of SCSM at the industry level will assist CSPs in offering secure and trusted cloud storage services to business organizations.
CHAPTER 2
LITERATURE REVIEW
2.1 Introduction
The National Institute of Standards and Technology (NIST) defines cloud
computing as “A model for enabling ubiquitous, convenient, on-demand network
access to a shared pool of configurable computing resources e.g., networks, servers,
storage, applications, and services, that can be rapidly provisioned and released
with minimal management effort or service provider interaction” (Mell and Grance,
2011). A cloud model is composed of five essential characteristics, defined as
follows:
i. On-demand Self-service: Consumers can automatically provision the
computing capabilities such as server time and network storage
without human interaction.
ii. Broad Network Access: Services are provided to a large community of users over the internet for ubiquitous and pervasive access via a web-browser.
iii. Resource Pooling: CSPs pool their resources to serve multiple clients by developing a multi-tenant architecture using virtualization tools and technologies.
iv. Rapid Elasticity: Clients can easily scale their service capabilities up and down by requesting the CSP, without being engaged in physical efforts.
v. Measured Service: Usage of resources such as storage, bandwidth, and processing is automatically monitored, controlled, reported and optimized to provide efficient services (Mell and Grance, 2011).
The remainder of this chapter is organized in eleven sections. Section 2.2 describes the cloud deployment models. Section 2.3 describes the cloud service delivery models. Section 2.4 describes cloud storage services, their concepts, advantages and the adopting organizations. Data security concerns in using cloud storage services, and the protection mechanisms used to overcome those concerns, are described in Sections 2.5 and 2.6, respectively. Section 2.7 describes the leading industry-implemented cloud storage services. Section 2.8 identifies and analyses the limitations and vulnerabilities of industry-implemented cloud storage services. Section 2.9 presents the related work of various researchers who designed or developed security models to overcome confidentiality and integrity concerns in using cloud storage services. Section 2.10 critically analyses the reviewed research contributions and determines their limitations. Section 2.11 describes the strengths of the related work and the complete roadmap of this research. Section 2.12 presents the summary of this chapter.
2.2 Cloud Deployment Models
Cloud services are mainly provided via the Private, Public, Hybrid, and Community cloud deployment models. Adoption of these models depends on the security, privacy, performance, flexibility and scalability requirements of an organization (Keung and Kwok, 2012; Soni et al., 2013). If no single model suits the demands of an organization, multiple models can also be combined, which is considered a hybrid cloud deployment model (Savu, 2011). The cloud deployment models are described in the following sub-sections.
2.2.1 Public Cloud
A public cloud infrastructure is managed and operated by the CSP. It is offered to a wide range of registered users, and it is configured and hosted on the CSP's premises. Resources such as storage, applications or servers are provided to clients without requiring them to build, maintain or monitor a personalized IT infrastructure (Wang et al., 2013). Clients are also not required to pay for software licensing or hardware purchasing costs. Using this deployment model, everything (SaaS, PaaS and IaaS) is provided by CSPs such as Amazon, Google and Microsoft (Ang et al., 2011). Clients can easily access public cloud services over the internet using a web-browser. They pay only according to their service usage, and resources are highly flexible and can be easily scaled up and down at any time with minimal interaction, which saves organizations from the cost and complexity of managing an over- or under-utilized computing infrastructure. Users of the public cloud also enjoy the benefits of resilient availability. For instance, Amazon guarantees 99.99% service uptime (Amazon, 2013). However, in the public cloud, clients do not gain direct control over their data or services, and there is no transparency of cloud security standards, techniques, policies or procedures that would allow clients to trust public cloud services (Huaqun, 2013; Astrova et al., 2012).
2.2.2 Private Cloud
A private cloud infrastructure is managed and operated by the client organization, the CSP or, in certain cases, both. Unlike the public cloud, a private cloud belongs exclusively to a single client (Stipic and Bronzin, 2012). A business organization in a sector such as education, banking or healthcare can use its existing in-house infrastructure to build a cost-effective private cloud using a supporting software stack such as Eucalyptus or OpenStack (Loewen et al., 2013; Baun and Kunze, 2009). Alternatively, if an organization does not own an on-premises infrastructure or lacks the supporting IT staff to build a private cloud, it can acquire one from a CSP. Normally, the private cloud offered by CSPs is the Virtual Private Cloud (VPC).

A VPC is an on-demand configurable pool of shared computing resources allocated within a public cloud environment. It is isolated from other parts of the public cloud infrastructure using encrypted communication channels, Access Control Mechanisms (ACMs), a Trusted Virtual Domain (TVD), and a Virtual Firewall (VF) (Mishra et al., 2013). Using a private cloud, clients have a greater degree of control over their data and services, since data reside on the premises of the client's organization and resources are controlled and operated by internal IT staff; however, a private cloud is less scalable than a public cloud or a VPC. Alternatively, a VPC is more secure than a public cloud, but the client has only minimal control over the acquired space, and since it is an off-premises service, client organizations still feel a lack of direct control over their personal data and computing resources (Dillon et al., 2010).
2.2.3 Hybrid Cloud
A hybrid cloud is the combination of multiple distinct cloud models (Private, Public or Community) to create a customized solution based on the requirements of an organization (Mell and Grance, 2011). This provides an opportunity for organizations to store sensitive information and mission-critical processes in a private cloud and non-critical information and processes in a public cloud, or to use different cloud models for backup and disaster recovery (Yen-Hung et al., 2013). A hybrid cloud solution is also useful when an organization has started a new business and requires certain systems to serve its customers, but does not possess an existing IT infrastructure or the technical IT staff to deploy, monitor and manage those systems.

Such an organization might decide to hand these tasks over to a third-party CSP by leveraging public cloud services without investing in an in-house infrastructure; as a result, it gains a cost-effective IT solution and the opportunity to focus on its core business. However, with the passage of time, as its customer growth rate increases, it may decide to establish its own IT department and build an on-premises private cloud as a long-term business strategy. The organization will then host its internal systems on the on-premises infrastructure but still continue to use SaaS and storage from the CSP; in this case its computing infrastructure is considered a hybrid cloud (Marston et al., 2011). The security of a hybrid cloud depends on the types of cloud models being mixed and the adoption approach, i.e. on- or off-premises.
2.2.4 Community Cloud
A community cloud infrastructure is provisioned for exclusive use by a specific group of organizations. Members of a community cloud may be a combination of hospitals, banks or universities that have a common mission, security requirements, policies, and compliance considerations (CSA, 2011). It may be managed and operated by one or more entities in the community, a third-party CSP or a combination of them, and it may exist on- or off-premises (Jadeja and Modi, 2012). In a community cloud, organizations have ubiquitous access to shared information. For example, using a healthcare community cloud, one hospital can access the shared records of patients from another hospital.

A community cloud is smaller than a public cloud but larger than a private cloud, since it is constructed by several organizations collaborating to achieve economies of scale (Gall et al., 2013; Sattiraju et al., 2013). Using this model, organizations have complete control over their management responsibilities, as they can control which other organizations are permitted to join the community infrastructure. Organizations also have substantial control over their resources, and they trust the security standards used, especially when the infrastructure is located at an on-premises datacentre. Organizations are also able to manage their data sharing policies. However, allocation of cost, responsibilities, governance, security and the control of multiple user access points are the main challenges faced when acquiring a community cloud (Sathiyapriya et al., 2013).
2.3 Cloud Service Delivery Models
CSPs offer a variety of services, such as remotely located business processing applications, integration and development tools, and flexible, scalable storage, to their clients. These services are obtained by clients through SaaS, PaaS, and IaaS, which are delivered via the private, public, hybrid, and community deployment models with minimal interaction and management responsibilities. The cloud service delivery models are described in the following sub-sections.
2.3.1 Software as a Service
SaaS refers to web-based applications, such as word-processing, human resource and customer relationship management systems, running on the CSP's infrastructure. These applications are accessed using a web-browser over the internet (Junjie et al., 2009). Users of SaaS are charged according to a pay-as-you-go billing model; they do not manage or control the underlying cloud infrastructure and have only limited permissions for application configuration settings. SaaS applications are gaining rapid popularity and are provided by well-known companies such as Microsoft and Salesforce. SaaS bears advantages for both clients and providers: the key benefits for providers include rapid deployment, better user adoption, and reduced support needs, while for customers the key benefits include lower IT cost and faster access to new technology, functionality and upgrades (Pang Xiong and Li, 2013). However, due to clients' lack of control over their enterprise applications, security and privacy are the top concerns preventing firms from adopting SaaS (Yu-Hui, 2011).
2.3.2 Platform as a Service
PaaS enables clients to rapidly develop, deploy and test applications on a cloud infrastructure using Application Programming Interfaces (APIs), libraries, and tools provided by the CSPs, without buying or maintaining the underlying infrastructure (Zeng and Xu, 2010). PaaS clients have full control over their deployed applications and adequate control over the application hosting platforms. PaaS tools can play an important role in the software development stages. Nowadays, there is a variety of PaaS offerings from CSPs, including Cloud Foundry, Azure, OpenShift, Google App Engine, AppFog, Cloudify and Heroku. PaaS offerings provide plenty of features that appeal to developers working in a cloud environment. However, there are some challenges in the adoption of PaaS, mainly due to the required learning curve, since developers are often not ready to use these new tools alongside the traditional ones they are already familiar with (Cohen, 2013).
2.3.3 Infrastructure as a Service
IaaS is a method of delivering resources such as servers, storage, networks and operating systems (OSs) as on-demand services (Xing et al., 2012). IaaS clients are not required to build private datacentres or maintain servers, storage or networks; they do not manage or control the underlying cloud infrastructure, but they have control over their OSs, storage and deployed applications, and possibly limited control of selected networking components, e.g., host firewalls (Mell and Grance, 2011; CSA, 2011). IaaS is offered by companies such as Amazon, Rackspace, HP and IBM. Business organizations can achieve high availability, reliability and disaster recovery solutions by leveraging IaaS. However, security policies, governance and a lack of awareness about the physical location of data remain the main barriers to adopting IaaS (Gibson et al., 2012).
2.4 Cloud Storage Services
Cloud storage services are offered via the public cloud deployment model, and they do not require the setup, configuration or installation of a personalized IT infrastructure at the client organization. These services are remotely acquired from a third-party CSP, such as Amazon or Google, at any time with on-demand unlimited capacity. For example, the Amazon Simple Storage Service (S3) offers data storage that is easily scalable without physical interaction, which can shrink and grow as per clients' requirements, and it does not require capital investment. Users are normally charged according to a pay-per-use billing model (Mazhelis et al., 2012). Besides cost-effectiveness, ease of use and accessibility, cloud storage services also provide resilient availability and disaster recovery solutions for organizations (Ullrich et al., 2012). Typically, CSPs utilize cost-effective redundant storage hardware, which overcomes the issue of interrupted service during a planned or accidental outage, e.g., scheduled maintenance or upgrades. Amazon's claim of providing 99.99% uptime is visible proof of resilient cloud storage service availability (Amazon, 2013).
Organizations that are not dealing with sensitive data are rapidly adopting cloud storage services for disaster recovery solutions. Under unwanted circumstances such as natural disasters, they can easily recover their data from the backup storage without loss or damage to the information. Normally, CSPs create redundant backups of clients' data at various backup zones, taking the local data protection law into consideration. Due to these backup zones, clients' data will always be protected and returned intact. In case of a disaster, IT staff from a CSP can restore the data back to the cloud from local storage, which may reside at geographically distributed backup zones (Javaraiah, 2011). However, compared to traditional storage methods, cloud storages pose new challenges in data security, reliability, and management. CSPs use a number of heterogeneous storage devices which work together to provide data storage and business functions. While using cloud storage services, clients are not aware of the details of the security controls used to protect their data (Zhang and Zhang, 2011).
Cloud storages are not yet being adopted by organizations dealing with confidential data, due to the emerging data confidentiality and integrity problems (Chirag et al., 2012). In order to enhance the adoption of cloud storage services by all types of business organizations, there is a tremendous opportunity for this research to formulate a valuable contribution that overcomes data confidentiality and integrity concerns.
2.5 Cloud Storage Data Security Concerns
Data stored in third-party cloud storages might not be secure due to the absence of guaranteed confidentiality- and integrity-preserving services. Although CSPs offer cost-effective cloud storage services, since these are remotely located facilities, clients cannot trust the CSP or feel assured that their data are always secure in the cloud. Hence, organizations dealing with confidential data are not willing to adopt cloud storage services until certain guarantees are achieved (Karumanchi, 2010). The information confidentiality and integrity concerns in using remotely located public cloud storage services are described in the following sub-sections.
2.5.1 Data Confidentiality
In a cloud computing environment, confidentiality is a key concern when data are of a sensitive nature, including information related to health, political opinions, and religious and personal beliefs. Organizations in the banking, healthcare and Payment Card Industry (PCI) sectors have strict data confidentiality requirements, so they have numerous anxieties about storing their sensitive records in remotely located, third-party-owned cloud storages (Hofmann and Woods, 2010).
Since clients lose direct control over their data by acquiring and using cloud storage services, new opportunities open up for illegitimate parties to access their personal records. The confidentiality of data can be breached in transit or while the data is stored at the cloud storage. When clients send data to the cloud, it may be attacked by a Man-in-the-Middle (MITM), or personal records can be viewed by the CSP, either of which leads to a breach of confidentiality. Encryption is recommended as the fundamental approach to preserving data confidentiality while using cloud storages (Karumanchi, 2010). However, the implementation of improper security procedures, such as ineffective KM approaches, may result in a vulnerable and insecure cipher. The data of a cloud user must be accessed only by the authorized parties specified in the SLA. Likewise, the CSP and other involved organizations, such as the TTP, must not exceed their privileges to view or distribute the confidential data of clients to unauthorized parties.
2.5.2 Data Integrity
Although data confidentiality at a cloud storage can be preserved by implementing encryption with effective KM approaches, these techniques cannot guarantee that the integrity of the data will always remain intact. Data integrity can be breached when data are modified by an external hacker or a malicious insider, even when the data is in cipher format. In order to defend against data integrity violations, security approaches such as ACMs and data auditing services should be implemented. Data owners must hire a TTP to monitor their data in the cloud and to ensure that their sensitive records are protected from integrity violations (Stamou et al., 2012). The TTP can verify the integrity of clients' data using applications based on implementations of the Digital Signature Algorithm (DSA). CSPs can also play an integral role in sustaining a safe computing environment for storing clients' confidential data by enforcing ACMs that are specified by the data owners according to their data security policies (Cong et al., 2013). Clients' trust in the cloud is breached when the SLA terms or conditions are not followed or fulfilled (Zissis and Lekkas, 2012).
2.6 Data Protection Mechanisms for Cloud Storages
In order to overcome the issues of breached data confidentiality and integrity in a cloud computing environment, researchers have designed and developed security models using various data protection mechanisms, which include cryptography and KM, trusted computing, ACMs, SLAs, and data auditing services. The significance of these data protection mechanisms for securing cloud storage services is described in the following sub-sections.
2.6.1 Cryptography and Key Management
The protection of data against loss and theft is a shared responsibility of the cloud customer and the CSP. Nowadays, encryption is one of the most strongly recommended techniques specified in cloud SLAs (Jansen and Grance, 2011). However, encryption alone is not enough to secure the data. There must be proper KM practices to ensure safe and legal access to encryption keys. Keys must be protected with the same significance as the data itself, and they should be accessed only by a limited number of authorized personnel. Proper procedures must be followed if encryption keys are lost or stolen (CSA, 2011).
The Triple Data Encryption Algorithm (TDEA) and the Advanced Encryption Standard (AES) are widely implemented symmetric algorithms used for protecting data at cloud storages. These algorithms use a single secret key for performing the encryption and decryption processes. Among asymmetric techniques, Rivest-Shamir-Adleman (RSA) cryptography and Elliptic Curve Cryptography (ECC) are the most commonly used to protect data in the cloud. Unlike symmetric methods, these use two different keys: a public key for encryption and a private key for decryption (Jing-Jang et al., 2011). If encryption practices are followed accurately, data will be protected from illegal access or theft by malicious employees of a CSP and external adversaries.
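The two-key asymmetric scheme described above can be illustrated with a deliberately tiny RSA example. The primes and exponent below are textbook toy values chosen only to make the arithmetic visible; real deployments use vetted cryptographic libraries and keys of 2048 bits or more:

```python
# Toy RSA illustration: encrypt with the public key, decrypt with the private key.
# Tiny primes are used for clarity only; NEVER use such parameters in practice.

def make_toy_keys():
    p, q = 61, 53                  # two small primes
    n = p * q                      # modulus: 3233
    phi = (p - 1) * (q - 1)        # Euler's totient: 3120
    e = 17                         # public exponent, coprime with phi
    d = pow(e, -1, phi)            # private exponent (modular inverse, Python 3.8+)
    return (e, n), (d, n)

def encrypt(message: int, public_key) -> int:
    e, n = public_key
    return pow(message, e, n)      # c = m^e mod n

def decrypt(cipher: int, private_key) -> int:
    d, n = private_key
    return pow(cipher, d, n)       # m = c^d mod n

public, private = make_toy_keys()
cipher = encrypt(42, public)
assert decrypt(cipher, private) == 42
```

The round trip works because of the RSA identity m^(e·d) ≡ m (mod n); only the holder of the private exponent d can reverse the encryption.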
2.6.2 Trusted Computing
Trusted computing refers to technologies and proposals for resolving computer security problems through hardware augmentations and related software amendments. Several well-known hardware manufacturers and software companies, jointly known as the Trusted Computing Group (TCG), are working on trusted computing and have produced significant developments to enhance computing security. The TCG developed a set of hardware and software technologies to enable the construction of trusted platforms (Karumanchi, 2010). Trusted computing is a promising technology for mitigating the novel security challenges in cloud computing infrastructures using TCG products. The use of approaches such as the TPM and the Virtual Trusted Platform Module (vTPM) can significantly improve the security of cloud services. For example, a TPM can be used by CSPs to secure cloud storage servers. In order to protect client Virtual Machines (VMs) residing at a remote cloud platform, each VM can be associated with a vTPM instance which emulates the TPM functionality to extend the chain of trust from the physical TPM. Similarly, the TED (a portable device containing the functionality of a TPM), introduced by Nepal et al. (2007), can be used by clients for remote platform attestation and integrity verification tasks.
2.6.3 Access Control Mechanisms
Access control is the process of limiting system access to only authorized people, programs, processes or other system components. ACMs are responsible for protecting cloud storage by limiting, denying or restricting access to a system or an entity according to well-defined security policies (Afoulk et al., 2012). The most common ACMs used in a cloud computing environment include Mandatory Access Control (MAC), Discretionary Access Control (DAC), and RBAC (Wei et al., 2012). All of these techniques are known as identity-based ACMs, as user subjects and resource objects are identified by unique names. Identification may be done directly or through roles assigned to subjects. ACMs guarantee the integrity and confidentiality of resources, and must be implemented by the CSP or a third party in association with the cloud user (Khan, 2012).
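As an illustration of the role-based variant, a minimal RBAC check can be sketched as follows. The role names, users and permissions here are hypothetical examples, not drawn from any particular CSP's implementation:

```python
# Minimal RBAC sketch: subjects are granted roles; roles carry permissions.
ROLE_PERMISSIONS = {
    "owner":   {"read", "write", "delete", "share"},
    "auditor": {"read"},
    "guest":   set(),
}

# Hypothetical role assignments for illustration.
USER_ROLES = {"alice": "owner", "ttp": "auditor", "mallory": "guest"}

def is_allowed(user: str, action: str) -> bool:
    """Grant an action only if the user's role carries that permission."""
    role = USER_ROLES.get(user)
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("alice", "delete")       # the data owner may delete
assert is_allowed("ttp", "read")           # the auditor may only read
assert not is_allowed("ttp", "write")
assert not is_allowed("mallory", "read")
```

Keeping permissions attached to roles rather than to individual users is what makes policy changes manageable: revoking a user means removing one role assignment, not editing every resource's access list.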
2.6.4 Service Level Agreement
An SLA is a formal document which contains the terms and conditions for using cloud services. It documents common understandings about priorities, responsibilities, and guarantees. The main objective of an SLA is to reduce key areas of potential conflict and to identify their resolution before they materialize. Each CSP offers a different SLA structure, service offerings, negotiation opportunities and performance levels. An SLA can be used to select a CSP on the basis of data protection, continuity, and cost. In order to avoid unwanted situations, the client and the CSP should involve a TTP who can monitor the provided service and independently undertake the necessary steps, in accordance with the specified terms and conditions, in cases of service violation (Stamou et al., 2012). The TTP will ensure the delivery of the required service from the CSP using metrics such as throughput, response time, availability, reliability, and data security controls (Ghosh and Ghosh, 2012). However, an SLA should not only validate and penalize the CSP but must also specify the possible penalties for malicious client activities (Kandukuri et al., 2009).
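As a simple sketch of the availability metric a TTP might monitor, the measured uptime can be compared against an SLA target such as the 99.99% guarantee mentioned earlier (real SLA monitoring tracks many more metrics and measurement windows):

```python
# Check measured availability against an SLA uptime target.
def availability_pct(uptime_minutes: float, total_minutes: float) -> float:
    return 100.0 * uptime_minutes / total_minutes

def sla_met(uptime_minutes: float, total_minutes: float,
            target_pct: float = 99.99) -> bool:
    return availability_pct(uptime_minutes, total_minutes) >= target_pct

# A 30-day month has 43200 minutes; 99.99% allows about 4.32 minutes of downtime.
assert sla_met(43200 - 4, 43200)        # 4 minutes down: within the SLA
assert not sla_met(43200 - 10, 43200)   # 10 minutes down: SLA violated
```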
2.6.5 Data Auditing Services
When clients transfer their confidential data to cloud storages, they lose physical possession of it, which raises serious concerns about data integrity protection and makes it a challenging task. While using cloud storage services, users should be able to access and process their data in the same manner as they do on their personal systems, without worrying about the requirements of verifying its integrity. Hence, facilitating public auditability for cloud storage is a vital task to ensure the protection of data residing at third-party storages. CSPs should negotiate with clients to allow a third-party auditor to check the integrity of outsourced data (Nithiavathy, 2013). However, the auditing process should not introduce new vulnerabilities to users' data confidentiality. The process of data verification is very significant while using cloud storages, since there are possible threats from external hackers and malicious insiders, such as disgruntled employees of a CSP, who may illegally or intentionally delete or modify clients' records. The third-party auditor can periodically conduct the auditing services on behalf of clients to ensure that their data are stored with integrity maintained. In cases of integrity violation, the auditor should report to the client and take the appropriate steps specified in the SLA (Cong et al., 2013).
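A periodic audit of the kind described can be sketched as comparing fresh digests of the stored objects against digests recorded at upload time. This is a simplified model with hypothetical object names; practical remote-auditing schemes avoid retrieving the full data on every check:

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Digests recorded by the client (or TTP) at upload time.
baseline = {"report.doc": digest(b"quarterly figures"),
            "ledger.db":  digest(b"account entries")}

def audit(stored: dict) -> list:
    """Return the names of objects whose current digest no longer matches."""
    return [name for name, data in stored.items()
            if digest(data) != baseline.get(name)]

# A malicious insider tampers with one object at the cloud storage.
cloud = {"report.doc": b"quarterly figures", "ledger.db": b"tampered entries"}
assert audit(cloud) == ["ledger.db"]
```

Any modification, however small, changes the SHA-256 digest, so the auditor detects the violation without needing to understand the data's contents.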
2.7 Industry Based Implementations of Cloud Storage Services
The demand for adopting cloud storage services is rapidly increasing due to advantages such as cost-effectiveness, scalability, backup, and disaster recovery. However, clients dealing with mission-critical data are reluctant to move their sensitive records to external, third-party-owned infrastructures due to data security concerns (Taeho et al., 2013; Syam and Subramanian, 2011; Gansen et al., 2010). Numerous industries are developing security models for cloud storages to preserve data confidentiality and integrity. This research focused on reviewing and analyzing the security of the cloud storage solutions developed by Amazon and Google, since they are the leading cloud storage providers and are quite transparent in discussing the security mechanisms and approaches they use to protect clients' data at cloud storages (Eric, 2013). The cloud security mechanisms and approaches used by Amazon and Google are described in the following sub-sections.
2.7.1 Amazon Simple Storage Service
Amazon S3 is a scalable, reliable and low-latency data storage infrastructure. It supports a simple web services interface which is used to store and retrieve unlimited data ubiquitously and pervasively. Users of S3 follow a pay-per-use billing model, and the provided service is highly flexible. Amazon considers S3 a highly durable storage infrastructure designed for mission-critical and primary data storage (Amazon, 2014). S3 supports a user authentication process for controlling access to confidential data. Under the data security approach of S3, only data owners have access to their personal resources. Clients can use ACMs such as bucket policies and Access Control Lists (ACLs) to selectively grant permissions to users and groups. Amazon further strengthens the authentication process for its customers by adding an extra layer of security to the system. Users are required to provide a six-digit single-use code in addition to their standard username and password credentials before access is granted to their services and resources. Customers retrieve this code from an authentication device which they keep in their physical possession. This process is called multi-factor authentication because two different factors are checked before access is granted (Amazon, 2011).
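Single-use six-digit codes of this kind typically follow the HMAC-based one-time password pattern. The sketch below implements the RFC 4226 HOTP construction as a generic illustration of how such a device derives its codes; it is not Amazon's actual implementation:

```python
import hashlib, hmac, struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Generate an RFC 4226-style one-time code from a shared secret and counter."""
    # HMAC-SHA1 over the 8-byte big-endian counter value
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low 4 bits of the last byte select a 4-byte window
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0 -> "755224"
assert hotp(b"12345678901234567890", 0) == "755224"
```

Because the server and the hardware token share the secret and advance the counter in step, the server can recompute and verify each six-digit code without the code itself ever revealing the secret.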
Figure 2.1: Server Side Encryption
(Jeff, 2011)
Furthermore, data is protected in transit during upload and download operations, as well as at rest. Clients can protect their data in transit using 256-bit SSL, and data at rest can be protected using Server Side Encryption (SSE) or Server Side Encryption with Customer-Provided Keys (SSE-C). Users of S3 can also use third-party libraries to encrypt the data before storing it in the cloud storage. The selection of an encryption method is based on the preferences and requirements of the clients (Amazon, 2014). Both SSE and SSE-C are used to encrypt data at rest, and both are based on the 256-bit AES algorithm. S3 encrypts the data when it is written to disk and decrypts it when a client submits a request to access it, as shown in Figure 2.1.
Using SSE, a client will not notice any difference in accessing the encrypted data, as long as S3 authenticates the client and validates his/her access privileges. Using SSE-C, the client provides the encryption key to the server as part of the request. S3 performs encryption when data is written to disk, and performs decryption when the client accesses the objects. Therefore, clients are not required to maintain any source code to encrypt or decrypt data manually; their only task is to manage the encryption keys. When a client uploads an object, S3 uses the provided encryption key to apply AES-256 encryption to the data, as shown in Figure 2.2.
Figure 2.2: Encryption with Client’s Key
(Jeff, 2014)
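The SSE-C flow, in which the server encrypts on write and decrypts on read using a key supplied with each request, can be sketched as follows. The keystream function below is a toy stand-in for AES-256 built from SHA-256, used only to keep the example self-contained; it is not secure and does not reflect S3's actual cipher implementation.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy SHA-256 counter-mode keystream -- a stand-in for AES-256, not real cryptography."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_bytes(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

# SSE-C flow: the client supplies the key with each request; the server encrypts on
# write and decrypts on read, but never persists the key itself.
customer_key = b"customer-supplied-key-material"
nonce = b"per-object-nonce"
plaintext = b"confidential object contents"

stored = xor_bytes(plaintext, keystream(customer_key, nonce, len(plaintext)))   # written to disk
recovered = xor_bytes(stored, keystream(customer_key, nonce, len(stored)))      # on a later GET
assert recovered == plaintext
```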
S3 does not store the client-provided encryption key. Instead, it stores a randomly salted Keyed-Hash Message Authentication Code (HMAC) value of the encryption key in order to validate future requests. The salted HMAC value cannot be used to derive the encryption key or to decrypt the contents of an encrypted object; therefore, if a client loses the encryption key, the cipher data becomes useless. As an alternative to SSE, clients can use third-party libraries or the Amazon Software Development Kit (SDK) to encrypt their data on the client side before sending it to the cloud storage. Clients can apply any encryption algorithm to their data, and the clients' private keys and unencrypted data are never sent to S3. It is therefore mandatory for clients to safely manage their keys. For client-side encryption, the Amazon Web Services (AWS) SDK uses a process called envelope encryption: the client provides the encryption key to the SDK's encryption client, which performs the entire encryption process, as shown in Figure 2.3.
Figure 2.3: Client Side Encryption
(Jeff, 2011a)
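The key-validation step described above, where S3 keeps only a randomly salted HMAC of the customer-provided key, can be sketched as follows; the storage layout and function names are illustrative assumptions.

```python
import hashlib
import hmac
import secrets

def register_key(customer_key: bytes):
    """Persist only a randomly salted HMAC of the customer's key, never the key itself."""
    salt = secrets.token_bytes(16)
    tag = hmac.new(salt, customer_key, hashlib.sha256).digest()
    return salt, tag                      # stored server-side alongside the object

def validate_key(customer_key: bytes, salt: bytes, tag: bytes) -> bool:
    """On a later request, recompute the HMAC and compare in constant time."""
    candidate = hmac.new(salt, customer_key, hashlib.sha256).digest()
    return hmac.compare_digest(candidate, tag)

salt, tag = register_key(b"customer-key")
assert validate_key(b"customer-key", salt, tag)     # correct key accepted
assert not validate_key(b"wrong-key", salt, tag)    # wrong key rejected
# The (salt, tag) pair cannot be inverted to recover the key or to decrypt the object.
```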
Initially, during the encryption process, the encryption client generates a one-time-use 256-bit AES symmetric key known as the envelope key; this key is used to encrypt the client's data, and the envelope key itself is then encrypted using the client's private key (Amazon, 2014). The client uploads the encrypted envelope key along with the encrypted data to S3. During retrieval and decryption, the encrypted data and the encrypted envelope key are fetched from the server; the envelope key is decrypted using the client's private key, and finally the client's data is decrypted using the envelope key. In this process, if the encryption keys are lost, clients will not be able to decrypt their data. Besides these encryption and
decryption approaches, S3 uses integrity verification techniques to verify the
correctness of data stored at the cloud. It uses a combination of checksums and
Cyclic Redundancy Checks (CRCs) to detect data corruption. S3 performs
checksums on data at rest and repairs any corruption using redundant data.
Moreover, the S3 server also calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data. The data protection
approaches of Amazon are also in compliance with its associated SLA which mainly
specifies the service commitments regarding durability and availability of service
(Amazon, 2014).
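The envelope encryption workflow described above can be summarized in a short sketch. The toy_cipher below is a toy XOR stream cipher standing in for AES-256 so the example stays self-contained; only the key-wrapping structure, not the cipher itself, reflects the actual scheme.

```python
import hashlib
import secrets

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """XOR with a SHA-256-derived keystream: a toy, symmetric stand-in for AES-256."""
    ks = b""
    block = 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        block += 1
    return bytes(a ^ b for a, b in zip(data, ks))

client_private_key = b"client-managed-master-key"
plaintext = b"payroll records for Q3"

# Encrypt: a fresh one-time envelope key protects the data, and the client's private
# key protects the envelope key. Both ciphertexts are uploaded; the plaintext
# envelope key is then discarded.
envelope_key = secrets.token_bytes(32)
encrypted_data = toy_cipher(envelope_key, plaintext)
encrypted_envelope_key = toy_cipher(client_private_key, envelope_key)

# Decrypt: recover the envelope key first, then the data.
recovered_key = toy_cipher(client_private_key, encrypted_envelope_key)
assert toy_cipher(recovered_key, encrypted_data) == plaintext
```

The design choice here is that each object gets its own one-time envelope key, so the long-lived client key only ever encrypts small keys, never bulk data.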
2.7.2 Google Cloud Storage
The data storage process can be time-consuming and costly since it includes maintaining data servers, storage disks, firewalls, backup copies and disaster recovery provisions. Google Cloud Storage (GCS) reduces these burdens on individuals as well as organizations, allowing them to store, retrieve, share, and analyze their data without worrying about maintenance or hardware and firmware upgrades. It is massively scalable: users can store and process terabytes of data to support the big data scenarios required by scientific, financial analysis and media applications, or they can store the small amounts of data required for light business websites. GCS is elastic, so users can design applications for a large global
audience, and scale those applications as desired. Users pay only for what they use, when they use it. GCS supports a simple programming interface which enables developers to take advantage of Google's own reliable and fast network infrastructure to perform data operations in a secure and cost-effective manner (Google, 2012). GCS users can move their data from Amazon S3 to the Google data storage through a migration pipeline, as shown in Figure 2.4.
Figure 2.4: Data Migration Process
(Google, 2013)
Besides these advantages, GCS also enforces strict security policies and approaches to protect users' confidential data from external as well as internal threats such as disgruntled employees. Google requires the use of a unique user identification number for each employee, which is used to identify each person's activity on Google's network, including any access to employee or customer data. During the hiring process, an employee is assigned an identification number by the Human Resources (HR) system and granted a default set of privileges. At the end
of a person’s employment, his/her account’s access to Google’s network is disabled
from the HR system. Google makes widespread use of two-factor authentication
mechanisms, such as certificates and one-time password generators. Two-factor
authentication is required for accessing the production environments and resources
through Google’s Single Sign-On (SSO) system. Access rights and levels are based
on an employee’s job function and role using the least-privileges concept (Google,
2012a).
GCS also uses ACLs to manage access to objects and buckets. ACLs are the
mechanisms that customers use to share objects with other users and allow them to
access their buckets as well as objects. An ACL consists of one or more entries,
where each entry grants permissions to a scope. Permissions define the actions that can be performed against an object or bucket (for example, read or write), whereas the scope defines to whom the permission applies, i.e. a specific user or group of users. When a user requests access to an object or bucket, the GCS system reads the ACL on the object or bucket and determines whether to allow or reject the request. If the ACL grants the user permission for the requested operation, the task is performed; through ACLs, users are able to share data with authorized colleagues and partners while controlling access to their confidential records. If the ACL does not grant the user permission for the requested operation, the request fails and a forbidden error or access-denied message is returned (Google, 2012a).
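The ACL evaluation logic described above can be sketched as follows; the entry format and scope strings are hypothetical, not GCS's actual internal representation.

```python
# Minimal ACL model: each entry grants one permission to one scope.
acl = [
    {"scope": "user:jane@example.com", "permission": "READ"},
    {"scope": "group:finance",         "permission": "WRITE"},
]

def is_allowed(acl, requester_scopes, permission):
    """Allow a request iff some entry grants the permission to one of the requester's scopes."""
    return any(entry["permission"] == permission and entry["scope"] in requester_scopes
               for entry in acl)

assert is_allowed(acl, {"user:jane@example.com"}, "READ")                   # task performed
assert not is_allowed(acl, {"user:jane@example.com"}, "WRITE")              # access denied
assert is_allowed(acl, {"user:bob@example.com", "group:finance"}, "WRITE")  # granted via group
```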
GCS users can also grant access to their objects to users who do not have GCS accounts, since such users can access the objects by authenticating with ordinary Google accounts. For example, in order to permit a user Jane to download an object from a bucket, the object owner must first grant Jane read permission for that object and then provide her with the resource link. When Jane opens the link in
her browser she will be automatically prompted to sign-in to her Google account.
After she is authenticated, and her browser has acquired a cookie with an
encapsulated identity token, she will be redirected to the object in GCS repository.
GCS then verifies that Jane is allowed to read the object, and then object is
downloaded to Jane’s computer (Google, 2014). This process of authorization is
shown in Figure 2.5.
Figure 2.5: Authentication Process
(Google, 2014)
GCS uses Hypertext Transfer Protocol Secure (HTTPS) with 256-bit SSL to establish a secure communication channel. Information
sent via HTTPS is encrypted from the time it leaves GCS until it is received by the
recipient’s computer. GCS automatically encrypts all data before it is written to disk,
at no additional charge. There is no setup or configuration required, and no need to
modify the way customers access their services. Data is automatically and
transparently decrypted when read by an authorized user. If customers require
encryption for their data, this functionality frees them from the hassle and risk of
managing personal encryption and decryption keys. GCS manages the cryptographic
keys on behalf of customers using the same hardened key management systems that
Google uses for their own encrypted data, including strict key access controls and
auditing. Each cloud storage object's data and metadata is encrypted using 128-bit AES, and each encryption key is encrypted with a regularly rotated set of master keys. Users of GCS are also able to encrypt their data themselves prior to sending it for storage; in this case, users are responsible for managing their own encryption and decryption keys (Ferreira, 2013). GCS provides a CRC header that allows clients to verify the integrity of object contents. For non-composite objects, GCS also provides a message digest header to allow clients to verify the integrity of the objects, but for composite objects only the CRC is available. Integrity checks are automatically performed on all uploads and downloads. Like S3, the service
provided to GCS customers is guaranteed to be operational and available in compliance with the SLA. If Google does not comply with the requirements of the SLA, and customers meet their obligations as specified in it, customers will be
eligible to receive service credit. Google also ensures that retired disks containing
customers’ old information are subjected to data destruction process prior to leaving
the premises (Google, 2012a).
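The client-side integrity verification described above can be sketched as follows. Python's zlib.crc32 stands in for the CRC32C variant that GCS actually uses, and the header names are illustrative.

```python
import base64
import hashlib
import zlib

def integrity_headers(payload: bytes) -> dict:
    """Compute integrity values for an object; zlib.crc32 stands in for GCS's CRC32C."""
    return {
        "crc32": base64.b64encode(zlib.crc32(payload).to_bytes(4, "big")).decode(),
        "md5": base64.b64encode(hashlib.md5(payload).digest()).decode(),  # non-composite objects
    }

uploaded = b"object contents"
headers = integrity_headers(uploaded)             # returned by the service with the object

# After download, the client recomputes both values and compares them to the headers.
downloaded = uploaded
assert integrity_headers(downloaded) == headers   # object intact
assert integrity_headers(b"tampered") != headers  # corruption would be detected
```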
2.8 Limitations of Industry Implemented Cloud Storage Services
Although cloud storage services have several advantages for business organizations as well as individuals, the in-depth literature review conducted for this research identified that well-known cloud storages such as S3 and GCS have certain vulnerabilities or limitations in terms of their KM approach, cryptographic support, SLA,
and data integrity verification services. The summarized information of industry
implemented cloud storage services and their limitations is provided in Figure 2.6.
Figure 2.6: Limitations of Amazon S3 and GCS
The shortcomings of existing cloud storages raise information confidentiality and integrity concerns; as a result, these services are not trusted by organizations to store mission-critical data such as healthcare, banking or government records. The common vulnerabilities of S3 and GCS are described in the following sub-sections.
2.8.1 Vulnerable Key Management Approach
GCS and Amazon S3 automatically encrypt data before it is written to disk using an SSE process at no additional charge. There is no setup or configuration required, and no need to modify the way clients access the service. Data is automatically and transparently decrypted when read by an authorized client, without requiring users to undertake the burden of managing their private keys.
CSPs protect clients' cryptographic keys using the same hardened KM systems that they use for their own encrypted data, including strict key access controls and auditing. However, this process of data and key protection requires clients to trust the CSPs unconditionally, because the keys are managed by them. In certain cases, organizations may be concerned that government authorities such as the National Security Agency (NSA) could obtain their keys from the CSP to decrypt and illegally access or view their sensitive records (Ferreira, 2013).
In order to address clients' concerns, the alternative approach supported by CSPs is to recommend that clients encrypt their data before sending it to the cloud storage, or to perform SSE-C without storing their keys at the server. This approach also has implications in terms of KM, since clients become responsible for managing their data encryption and decryption keys and protecting them from compromise by an adversary, unintentional deletion or loss. Clients are not eager to undertake such a responsibility, as it requires managing hardware and software that add to the cost of the overall service. Clients must instead be provided with an effective and secure KM approach, involving a TTP, so that they can protect their data confidentiality and integrity without the burden of maintaining a secure key storage service.
2.8.2 Inadequate Cryptographic Support
The data stored at S3 or GCS are encrypted using the AES algorithm: S3 supports 256-bit AES, whereas GCS supports 128-bit AES. At present, 256-bit AES is considered secure, but GCS has been criticized for using 128-bit AES on the grounds that it is not strong enough to protect confidential data and could be cracked by today's technology in a reasonable amount of time (Ferreira, 2013). Besides the security concerns, the encryption techniques of S3 and GCS have limitations in terms of functionality and usage. For example, when data is encrypted at GCS or S3, it cannot be processed to perform computations unless it is decrypted to its original format, so users are required to download their data each time they wish to perform such tasks. In other words, existing cloud storage services provide no support for FHE, which would enable users to perform live computations on their data while their privacy remains preserved (Murali et al., 2013). FHE is a good basis for enhancing the security of un-trusted systems or applications that store and manipulate sensitive data. Although FHE has been implemented, it has not proven efficient for practical use in demanding cloud computing services such as S3 and GCS (Kui et al., 2012; Wang et al., 2013; Stefania et al., 2012).
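Although no efficient FHE scheme is available, the idea of computing on ciphertexts can be illustrated with a partially homomorphic scheme. Textbook RSA, shown below with deliberately tiny and insecure parameters for illustration only, is multiplicatively homomorphic: multiplying two ciphertexts yields an encryption of the product of the plaintexts.

```python
# Textbook RSA with deliberately tiny, insecure parameters, used only to show the
# multiplicative homomorphism Enc(a) * Enc(b) mod n = Enc(a * b).
p, q, e = 61, 53, 17
n = p * q                                # 3233
d = pow(e, -1, (p - 1) * (q - 1))        # private exponent (Python 3.8+)

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

c1, c2 = enc(7), enc(6)
product_cipher = (c1 * c2) % n           # computed on ciphertexts, without decrypting
assert dec(product_cipher) == 42         # 7 * 6, recoverable only by the key holder
```

A fully homomorphic scheme would support both addition and multiplication on ciphertexts, and hence arbitrary computation; this partial homomorphism shows why the property is attractive and why its absence forces users to download and decrypt before computing.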
2.8.3 Exclusion of Security Assurance in Service Level Agreements
Storing important data with cloud storage providers comes with serious
security risks. CSPs can leak, modify, or return inconsistent data to different users,
which may happen due to bugs, crashes, operator errors, or misconfigurations.
Furthermore, security breaches can be much harder to detect or more damaging than
accidental ones, for example external adversaries may penetrate the cloud storages,
or employees of the service provider may commit insider attacks. These concerns
have prevented security conscious enterprises and consumers from adopting cloud
computing despite its benefits. Usually, security and privacy assurance is provided
to customers by signing an SLA which involves all the terms and conditions to
guarantee fairness of service for the client as well as the CSP. Unfortunately, none of today's cloud storage services, including those of Amazon and Google, provide security assurance in their SLAs. For example, the SLAs of S3 and GCS only guarantee service uptime: if availability falls below 99.9%, clients are reimbursed a contractual sum of money. As cloud storage moves towards a commodity business, security will be a key parameter for providers to differentiate themselves. The existing SLAs are also fixed and non-negotiable, and are unable to satisfy the security and privacy requirements of organizations dealing with confidential data (Asha, 2012). CSPs or the involved TTPs must detect violations of security properties and decide the penalties according to the agreed SLA.
2.8.4 Untrustworthy Data Integrity Verification Services
In a cloud computing environment, malicious outsiders and semi-trusted CSPs
are considered as potential adversaries. Malicious outsiders can be economically
motivated and they have the capability to attack cloud storage servers in order to
subsequently violate or delete clients’ data while remaining undetected. The CSPs
are semi-trusted in the sense that most of the time they behave appropriately and do not deviate from the prescribed protocol execution. However, CSPs might neglect to keep, or deliberately delete, rarely accessed data files that belong to ordinary cloud data owners for their own benefit (Cong et al., 2013). Since clients no longer hold
their data locally, it is of critical importance for the clients to ensure that their data
are being correctly stored and maintained. Clients should be equipped with certain
security measures so that they can periodically verify the correctness of their
remotely located data even without existence of local copies (Ushadevi and
Rajamani, 2012).
In order to guarantee customers that their data always remains intact at the
cloud storage, S3 regularly verifies the integrity of data stored using checksums. If
integrity violation is detected, it is repaired using redundant data. Moreover, S3
calculates checksums on all network traffic to detect corruption of data packets when
storing or retrieving data. GCS provides CRC header that allows clients to verify the
integrity of object contents. Integrity checks are automatically performed on all
uploads and downloads. Since CSPs are semi-trusted, they may decide to hide data corruptions caused by server hacks or Byzantine failures in order to maintain their reputation. Therefore, the data auditing processes conducted by the CSPs are not trustworthy, because clients cannot be confident that CSPs will not hide their own violations. At the same time, clients lack the knowledge to initiate a data integrity verification process themselves, as well as the feasibility and resources to monitor their data.
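A naive sketch of such client- or TTP-initiated spot checking is shown below: the auditor keeps one small HMAC tag per block and challenges the server for individual blocks. This is an illustrative assumption rather than an existing protocol; practical auditing schemes use homomorphic authenticators so that the verifier need not receive the blocks themselves.

```python
import hashlib
import hmac
import secrets

# Before upload, the owner (or a TTP acting for the owner) keeps one small HMAC tag
# per block; the tags are far smaller than the data itself.
audit_key = secrets.token_bytes(32)
blocks = [b"block-0", b"block-1", b"block-2"]
tags = [hmac.new(audit_key, b, hashlib.sha256).digest() for b in blocks]

def audit(index: int, block_from_server: bytes) -> bool:
    """Spot-check one randomly chosen block against the locally kept tag."""
    candidate = hmac.new(audit_key, block_from_server, hashlib.sha256).digest()
    return hmac.compare_digest(candidate, tags[index])

assert audit(1, b"block-1")        # honest server passes the spot check
assert not audit(1, b"tampered")   # silent corruption or deletion is detected
```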
Under such circumstances, CSPs such as Amazon and Google should allow a TTP to conduct data auditing services on behalf of clients, where the TTP acts as a neutral middleware performing the required tasks without favoring either party. The TTP should be able to audit the cloud data storage efficiently, without requiring a local copy of the data and without imposing any additional burden on data owners. Unfortunately, cloud storage vendors currently lack such a solution, which reduces clients' trust in CSPs when it comes to verifying the correctness of their sensitive records at the cloud storages (Ling et al., 2011).
2.9 Confidentiality and Integrity Preserving Cloud Storage Models
Several researchers from academia and industry have designed as well as
developed solutions to overcome confidentiality and integrity concerns for using
cloud storage services. These solutions are formulated using security techniques,
algorithms, procedures or processes. This section provides in-depth analysis on five
significant related work contributions.
2.9.1 Secure Cloud Storage Integrator for Enterprises
Seiger et al. (2011) mentioned that enterprises dealing with sensitive data
which is subject to legal regulations are concerned about data security and privacy
for using cloud storage services. In these scenarios, all files and information need to
be protected when leaving a company's intranet. They proposed a system architecture for securing off-site data storage, as shown in Figure 2.7. The key component of the proposed architecture is the proxy server, which is responsible for integrating external storage services from the internet, offering new resources to client computers on the intranet, and securing all data transfers as soon as they leave the trusted enterprise network zone. When a user, usually a company employee, copies a file to a desired folder on the network drive, it will be cached on the proxy and divided by the server into several parts using an information dispersal algorithm. The resulting data
slices will be redundantly stored either on a locally attached storage such as Network
Attached Storage (NAS) or at one of the online cloud drives provided by Amazon,
Dropbox or Rackspace, using the protocol adapter.
The protocol adapter provides integration between a client PC and the cloud storage. During the entire process, additional information and metadata belonging to the
outsourced file will be stored in a database which allows the cached file to be deleted
from proxy server after the storage procedure is completed successfully. Depending
on whether the storage node where a data slice should be stored is trusted or
untrusted, additional encryption of the slice will be performed using AES which is
executed by the Bouncy Castle cryptography library.
Figure 2.7: Cloud Storage Integrator
(Seiger et al., 2011)
In order to increase server performance, data fragments will not be encrypted at local storage locations. Data integrity will be achieved by using an AES cipher-based message authentication code mode of operation for encryption, which produces an additional message authentication code for each data fragment. This allows checking the state of a slice and replacing it with a healthy one in case of an integrity violation.
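The slicing step can be illustrated with the simplest possible form of dispersal: n-of-n XOR splitting, where every slice is needed for reconstruction and any incomplete set reveals nothing. This is a toy stand-in; the authors' actual information dispersal algorithm additionally provides redundancy so that lost slices can be tolerated.

```python
import secrets

def split(data: bytes, n: int) -> list:
    """n-of-n XOR splitting: all n slices are needed; any n-1 slices reveal nothing."""
    shares = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
    last = data
    for share in shares:
        last = bytes(a ^ b for a, b in zip(last, share))
    return shares + [last]

def combine(shares: list) -> bytes:
    out = bytes(len(shares[0]))
    for share in shares:
        out = bytes(a ^ b for a, b in zip(out, share))
    return out

slices = split(b"quarterly-report.pdf contents", 3)     # one slice per storage node
assert combine(slices) == b"quarterly-report.pdf contents"
assert combine(slices[:2]) != b"quarterly-report.pdf contents"  # incomplete sets are useless
```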
2.9.2 Data Confidentiality and Integrity Verification Using User
Authenticator Scheme in Cloud
Nirmala et al. (2013) stated that existing network security solutions are not effective for cloud security, since they were not designed for a cloud environment, and that new data protection approaches are therefore needed to enhance cloud storage security. They proposed a user authenticator scheme which provides a solution for preserving data confidentiality as well as integrity when adopting and using third-party cloud storage services. Nirmala et al.
(2013) assumed that a file F is divided into N blocks. Each block is encrypted using
AES by requesting the server, as shown in Figure 2.8. Let puk be the public key and
prk be the private key of the data owner. Initially, a data block will be encrypted
with puk to achieve data confidentiality.
Figure 2.8: Preserving Data Confidentiality
(Nirmala et al., 2013)
Figure 2.9: Data Integrity Verification
(Nirmala et al., 2013)
Figure 2.10: Data Updating
(Nirmala et al., 2013)
A hash value will then be produced over the encrypted file using the Secure Hash Algorithm (SHA), and a digital signature will be generated and attached to each encrypted hash code to ensure authentication. Using the proposed approach, the data owner can verify the integrity of the outsourced data by requesting the server to send the hash code for a specific block Ni, where i = 1, 2, 3…n. Data integrity
will be verified by comparing the hash code of the data that is stored, with the one
which will be retrieved from the cloud server, as shown in Figure 2.9. Through
requesting the server, user is also able to perform modification, deletion or updating
operations on data while it remains encrypted, as shown in Figure 2.10. However,
after the completion of each operation, metadata will be updated.
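The block-level verification described above can be sketched as follows; the block contents are placeholders, and the digital-signature step is omitted for brevity.

```python
import hashlib

# Owner-side sketch: hash every encrypted block with SHA-256, keep the digests, and
# later compare the digest the server returns for block Ni.
encrypted_blocks = [b"enc(block-1)", b"enc(block-2)", b"enc(block-3)"]
owner_digests = [hashlib.sha256(b).hexdigest() for b in encrypted_blocks]

def server_hash(i: int) -> str:
    """What the cloud server returns when asked for the hash of block Ni (1-based)."""
    return hashlib.sha256(encrypted_blocks[i - 1]).hexdigest()

i = 2
assert server_hash(i) == owner_digests[i - 1]    # block Ni verified intact

encrypted_blocks[1] = b"modified at the server"  # simulate tampering in the cloud
assert server_hash(i) != owner_digests[i - 1]    # integrity violation detected
```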
2.9.3 Secure Storage Services in Cloud
Cloud users typically do not have any control over the cloud storage servers,
so there is an inherent risk of data exposure to third parties by the CSP. The data
must be properly encrypted both in motion and at rest. There is an additional risk of
data tampering by a third party or by the CSP. In order to overcome the issues of
data confidentiality and integrity at the cloud storages, Nepal et al. (2011) presented
a service-oriented solution known as TrustStore for provisioning secure storage
services in a hybrid cloud environment. It can ensure safety, confidentiality, and
integrity of stored data in a way that is independent of actual storage services.
TrustStore includes two significant components, i.e. KM and Integrity Management
(IM) service providers. Figure 2.11 shows the process of a client using the
TrustStore to upload files. Using the TrustStore, users will first enter personal credentials to create a profile, which specifies the client-side location of the files to be stored. Once the files are dragged and dropped from the user interface to the TrustStore client,
they will be fragmented, encrypted, and hashed or signed.
Figure 2.11: TrustStore Hybrid Cloud Service
(Nepal et al., 2011)
The encrypted fragments will be uploaded to Cloud Storage Service Provider
(CSSP) and encryption keys will be stored with KM service provider while hashes
will be stored with IM service provider. When users want to access stored files later,
they will start the TrustStore client and load the previously created profile by
entering the credentials which they used to encrypt the profile. The TrustStore will
present the directory tree of stored files similar to a file browser. Upon double-
clicking the files, users can retrieve and open them from the TrustStore. The
fragments will be downloaded from the closest CSSP. Data integrity will be verified
using the signatures from the IM service provider and the fragments will be
decrypted using the keys from the KM service provider. If integrity is violated, a
different CSSP will be tried. Finally, the fragments will be joined together and the
retrieved file will be made available to the associated user.
2.9.4 Data Confidentiality in Storage-Intensive Cloud Applications
The third party managed cloud services offer high availability and elastic
access to the resources. Unfortunately, taking advantage of these services requires
organizations to accept a number of serious data security risks. Factors such as
software bugs, operator errors and external attacks can all compromise the
confidentiality of sensitive applications and data on external clouds by making them
vulnerable to unauthorized access by malicious parties. Puttaswamy et al. (2011) proposed an approach named Silverline which balances confidentiality against computation on the cloud. It identifies and filters the functionally encryptable data, i.e. the data that can be encrypted without impacting the functionality of the application. The model encrypts string-type data which does not need calculation, such as names, addresses or contact numbers, and leaves unencrypted the data which requires computation, such as the age of an employee, since age is obtained by calculating the difference between the current date and that employee's date of birth. In order to provide an efficient KM service, the
authors suggested encrypting the data set with a number of different keys. The
whole data will be divided into subsets and encrypted using symmetric cryptography.
The corresponding keys will be assigned to privileged users by the data owner and
they will be stored in organization’s database server, as shown in Figure 2.12. Each
encrypted dataset may consist of several rows and columns of a database. If a group
of users is privileged to access the same dataset, they will be assigned with the same
key, whereas if certain data belongs to a single user for instance the manager,
corresponding key will be provided only to that specific user. Users will fetch the
keys from the server and they will be retrieved and stored in a user’s browser under
certain security mechanisms such as Hypertext Markup Language (HTML)-5 post-
message calls and iFrames. When users get the appropriate key(s), they will query
the cloud server to fetch data. The input parameters to query will be sent in the
encrypted form. The cloud will execute the query using the encrypted input and then
it will return the results also in encrypted form.
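The partitioning of a record into functionally encryptable and computation-bearing fields can be sketched as follows. The field names are illustrative, and a deterministic keyed hash stands in for the real (decryptable) cipher.

```python
import hashlib

# Hypothetical record split: fields the application never computes on are encrypted;
# fields needed for server-side computation (age queries need date_of_birth) stay usable.
COMPUTED_FIELDS = {"date_of_birth"}

def toy_encrypt(value: str, key: bytes) -> str:
    """Deterministic keyed hash standing in for a real, decryptable cipher."""
    return hashlib.sha256(key + value.encode()).hexdigest()[:16]

def prepare_record(record: dict, key: bytes) -> dict:
    return {field: (value if field in COMPUTED_FIELDS else toy_encrypt(value, key))
            for field, value in record.items()}

employee = {"name": "Alice", "address": "12 High St", "date_of_birth": "1990-05-01"}
stored = prepare_record(employee, b"org-key")
assert stored["date_of_birth"] == "1990-05-01"   # left usable for computation
assert stored["name"] != "Alice"                 # functionally encryptable field protected
```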
Figure 2.12: Key Management and Data Confidentiality
(Puttaswamy et al., 2011)
The user’s device will decrypt and display the data. Data integrity will be verified using a hash-based message authentication code: users calculate the hash, attach it to the data slot, and send it to the cloud together with the data. The confidentiality of a cell is considered violated when the key to decrypt it is given to a user who does not belong to the label of that cell. Puttaswamy et al. (2011) assumed that using an SLA together with Silverline will ensure that the CSP provides the agreed level of service, such as the required availability or security parameters.
2.9.5 Cloud Storage Integrity Checking Using Encryption
Algorithm
When data are stored at the cloud, it is important to ensure that the stored
records are neither compromised nor corrupted. The existing protocols reveal
clients’ sensitive data by sharing the encryption and decryption keys with the cloud
server. Varalakshmi and Deventhiran (2012) designed a conceptual model of a cloud
storage architecture using an encryption algorithm with dynamic small size key to
ensure data security without compromising information to the cloud server. It
involves a TTP broker that is comprised of five major components, i.e. Partitioner,
Encryptor, Hash Tag generator, Verifier, and Database Manager, as shown in Figure
2.13.
Figure 2.13: Cloud Storage Security using Broker
(Varalakshmi and Deventhiran, 2012)
Using the proposed model, initially a client’s request for storing files will be
sent to the request handler and files will be queued up at the Encryptor. The files
will be encrypted and sent to the partitioner that will divide them in segments which
will be further provided to hash tag generator for calculating the hashes using SHA.
The calculated hashes will be stored with the database manager. When a client wants to retrieve a file from the cloud storage, the retrieval request will be sent to the request handler, which will pass it to the verifier. Considering the client's identity, the verifier will retrieve the corresponding details from the database and request the segments from the respective VMs on the cloud. The verifier will generate the hash of each encrypted segment retrieved from the VMs and compare it with the stored hash value. If all segments match, the verifier will conclude that the file is intact and combine all the encrypted segments of that file based on their sequence numbers. Finally, the broker will decrypt the combined file and send it to the client. If there is any mismatch between a newly calculated hash and the originally stored one, the verifier will conclude that the file has been compromised, and this information will be sent to the client.
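The broker pipeline described above can be sketched end to end as follows; byte reversal stands in for the broker's encryption algorithm, and the segment size is an arbitrary toy value.

```python
import hashlib

SEGMENT = 8  # bytes per segment (arbitrary toy size)

def store(file_bytes: bytes):
    """Broker side: 'encrypt' (toy byte reversal), partition, and hash each segment."""
    encrypted = file_bytes[::-1]                                  # stand-in for the cipher
    segments = [encrypted[i:i + SEGMENT] for i in range(0, len(encrypted), SEGMENT)]
    hashes = [hashlib.sha256(s).hexdigest() for s in segments]    # kept by the database manager
    return segments, hashes

def retrieve(segments, hashes):
    """Verifier side: recompute every segment hash; reassemble and decrypt only if all match."""
    if any(hashlib.sha256(s).hexdigest() != h for s, h in zip(segments, hashes)):
        return None                                               # report compromise to the client
    return b"".join(segments)[::-1]

segments, hashes = store(b"ledger entries for January")
assert retrieve(segments, hashes) == b"ledger entries for January"
segments[1] = b"corrupt!"                       # a VM returns a tampered segment
assert retrieve(segments, hashes) is None       # mismatch reported instead of the file
```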
2.10 Critical Analysis on Related Work Solutions
In order to develop a widely accepted cloud storage model, numerous
researchers have proposed solutions for securing cloud storages. The contributions
proposed by Seiger et al. (2011), Nirmala et al. (2013), Nepal et al. (2011),
Puttaswamy et al. (2011) and Varalakshmi and Deventhiran (2012), were reviewed.
Each contribution proposed approaches to overcome confidentiality and integrity
concerns for using cloud storages. However, there are certain limitations identified
in these solutions that require further improvements as well as enhancements. For
example, the cloud storage model designed by Seiger et al. (2011) depends on a proxy server which is maintained, operated and owned by the client's organization. If the proxy is down, the entire cloud service will be interrupted.
Proxy outages are a serious and frequent concern for companies. In order to analyze the downtime and impact of proxy failures, Computer Associates Technologies (CAT) conducted a survey of 200 companies across North America and Europe. The survey found that companies face an average of 14 hours of downtime per year, costing each organization roughly $150,000 (Harris, 2011). Considering these survey results, it is advisable to depend on pure cloud computing solutions which guarantee 99.99% uptime rather than relying on unreliable proxy-based solutions. The summarized information of these contributions is shown in
Figure 2.14.
Figure 2.14: Academia Implemented Cloud Storage Models
Another limitation of the model designed by Seiger et al. (2011) concerns the use of multiple CSPs, as this approach increases the probability of confidentiality breaches and SLA complexities, in addition to the cost of acquiring service from multiple CSPs (Victor et al., 2013). Using this model, users are also unable to perform operations on data while it remains encrypted at the cloud storages; in order to modify data, users are required to decrypt it each time, just as with the S3 and GCS services, due to the absence of an efficient FHE scheme. On the other hand, the solution provided by Nirmala et al. (2013) enables the user to insert, modify or delete data without decryption; however, there is still no possibility of performing computations on data while it remains encrypted. When using cloud-based solutions, users must be able to perform live computations on their data while storing it securely. The cloud storage models provided by Nirmala et al. (2013) and Seiger et al. (2011) are based on storing cryptographic keys in a local storage with the enterprise proxy server, a practice that increases the risk of losing keys to an adversary if the server is vulnerable to attacks or the KM approach is not secure. If cryptographic keys are lost, this may lead to permanent loss of confidential data (Rajasekar and Chris, 2010).
In order to ensure the correctness of clients’ data, a cloud storage solution
must support trusted data auditing services. Nirmala et al. (2013) addressed
this by enabling data owners to verify their data integrity themselves, but this
approach is considered vulnerable in the list of cloud security recommendations
issued by CSA (2011) and mentioned by Nithiavathy (2013), because clients lack
the expertise to conduct professional data auditing, and when a data integrity
violation is detected, no mechanism is suggested for recovering the data to its
original state.
In the cloud storage model provided by Varalakshmi and Deventhiran (2012), a
cloud broker is in charge of significant tasks such as encryption, decryption
and partitioning. However, a broker can itself be disgruntled (Ranchal et al.,
2010), and the proposed model has no mechanism to protect against malicious
activities of the broker. The information sent is not protected during transfer,
which increases the possibility of confidential data being revealed to a MITM
(Le et al., 2013; Kumar and Dubey, 2013). Protection of data at rest is achieved
using symmetric cryptography algorithms, which are normally considered weak
methods for securing data because the secret key is shared with several data
receivers (Yogesh Kumar et al., 2011). Leakage of the secret key leads to a
breach of data confidentiality.
Alternatively, in the model proposed by Puttaswamy et al. (2011), keys are
stored securely with the users but are insecure during transmission; a MITM can
gain access to the plain keys (Fadadu et al., 2012). Data decryption in this
approach is performed by the proxy server. Considering the frequent problem of
proxy server outages, it is noteworthy that users will be unable to decrypt data
retrieved from the cloud storage whenever the proxy is down. Clients are also
required to use personalized software for encryption and decryption operations,
which increases cost and maintenance responsibilities (Asghar et al., 2013).
The KM approach proposed by Nepal et al. (2011) is effective compared to those
suggested by Seiger et al. (2011), Nirmala et al. (2013) and Puttaswamy et al.
(2011): keys are stored with a KM service provider instead of the client, which
is the recommended security guideline for using cloud storage services (CSA,
2011). However, as in the model of Puttaswamy et al. (2011), the keys are not
secure during transfer to the KM service provider. Moreover, under this
approach, users must be within the company network to use the cloud storage
services, because the proxy communicates only with computers inside the
enterprise network, which reduces service accessibility. The encryption scheme
proposed by Nepal et al. (2011) is also static; users cannot modify or perform
operations on data while it remains encrypted.
2.11 Contribution and Road Map of Research
In order to achieve the research aim, this research focused on overcoming the
limitations identified in the related work by designing and developing an
improved and enhanced model. Filtering out those limitations, this research
adopted the strengths of the related work in the development of SCSM. For
example, the concept of data integrity verification was implemented as suggested
by Seiger et al. (2011), but further improved by eliminating the complexity of
using multiple CSPs. Secondly, the implemented KM approach is similar to the one
provided by Nepal et al. (2011), but enhanced using the KM recommendations of
the German Federal Office of Information Security (GFIS). This research also
enhanced the functionality of the encryption technique used by Nirmala et al.
(2013) to enable users to perform limited computations on data while it remains
encrypted. The final contribution of this research builds on lessons learned
from the literature review, with new advancements, to develop a new model, i.e.
SCSM, which preserves data confidentiality and integrity for cloud storage
services and ensures the delivery of trusted services to clients. SCSM is
formulated using existing security mechanisms but is an entirely distinctive and
new approach to the research problem. It is based on a combination of the
following five major components:
i. Multi-factor authentication and authorization process using RBAC with
CRSCG.
ii. Partial homomorphic cryptography using the RSA algorithm.
iii. TTP services, i.e. a secure KM approach and a data auditing process.
iv. Implementation of 256-bit SSL.
v. An SLA for organizations storing highly sensitive data on the cloud.
The multi-factor authentication and authorization process of SCSM is the
initial and significant step in preserving data confidentiality and integrity.
This component was implemented to create an additional layer of security for
accessing the system beyond traditional username and password authentication.
It prohibits invalid access to system operations such as encryption or
decryption tasks even when the username and password of a privileged user are
compromised by an adversary, since each task is controlled using CRSCG with
RBAC. When mission-critical client data are stored with an untrusted CSP,
confidentiality becomes a paramount concern. This research therefore
implemented the component of partial homomorphic cryptography using the RSA
algorithm to protect sensitive client records from illegal access and viewing
by external attackers as well as malicious insiders. Using this approach,
clients can also perform a certain number of operations on their records while
their privacy remains preserved at the cloud storage.
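RSA is partially homomorphic in the multiplicative sense: the product of two
ciphertexts decrypts to the product of the corresponding plaintexts, which is
what allows limited computation on encrypted records. The sketch below
illustrates the property with textbook-sized (insecure) parameters in Python;
it is not the thesis implementation.

```python
# Illustrative sketch: RSA is multiplicatively homomorphic, i.e. the product
# of two ciphertexts decrypts to the product of the plaintexts. Toy key sizes.

def rsa_keygen():
    # Tiny fixed primes for demonstration; real deployments use >= 2048-bit moduli.
    p, q = 61, 53
    n = p * q                      # modulus
    phi = (p - 1) * (q - 1)        # Euler's totient
    e = 17                         # public exponent, coprime with phi
    d = pow(e, -1, phi)            # private exponent: modular inverse of e
    return (e, n), (d, n)

def encrypt(m, pub):
    e, n = pub
    return pow(m, e, n)

def decrypt(c, priv):
    d, n = priv
    return pow(c, d, n)

pub, priv = rsa_keygen()
c1, c2 = encrypt(6, pub), encrypt(7, pub)
# Multiplying ciphertexts corresponds to multiplying the plaintexts:
product = decrypt((c1 * c2) % pub[1], priv)
print(product)  # 42
```

Textbook RSA as shown is insecure and serves only to demonstrate the
homomorphic property; real deployments use padding and much larger moduli.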
When the data are encrypted, third parties or the CSP cannot decrypt the
clients’ information because the private keys are accessible only to the data
owners. However, it must be noted that encryption cannot guarantee the
correctness of data; in other words, it cannot detect violations such as
intentional modification, alteration or deletion of confidential data by
illegal parties. To address this shortcoming, this research implemented a data
auditing process using a TTP. Through this process, a TTP can audit clients’
data to ensure that it always remains intact. If data has been violated, the
TTP reports to the data owners and actions are taken according to the
constructed SLA.
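The auditing idea can be sketched as a digest comparison: the data owner keeps
verification metadata for each stored object, and the TTP recomputes it over
the data held at the cloud storage. The Python sketch below is a
simplification; it uses a bare SHA-1 digest and omits the signature that the
implemented model applies to the metadata.

```python
import hashlib

# Simplified sketch of the auditing idea: the owner keeps verification
# metadata (here, a SHA-1 digest) and the TTP recomputes it over the stored
# blob to detect tampering. The real model signs digests; this sketch omits
# the signature for brevity.

def make_vmd(blob: bytes) -> str:
    return hashlib.sha1(blob).hexdigest()

def audit(stored_blob: bytes, vmd: str) -> bool:
    # True when the stored data still matches the owner's metadata.
    return hashlib.sha1(stored_blob).hexdigest() == vmd

original = b"encrypted-record-0001"
vmd = make_vmd(original)
print(audit(original, vmd))                 # True: data intact
print(audit(b"tampered-record-0001", vmd))  # False: violation detected
```

In the model, a `False` result would trigger the TTP’s report to the data
owner and the recovery and SLA actions described above.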
Although the implemented RSA partial homomorphic cryptography preserves data
confidentiality, if private keys are lost by the clients or stolen by an
adversary, the encryption becomes meaningless. To overcome this limitation,
this research implemented a secure KM approach that protects the clients’
secret keys from the generation phase until the destruction phase. The secret
keys are not accessible to anyone except the data owner. Significant client
parameters such as usernames, passwords, metadata and keys are protected from
MITM attacks by implementing 256-bit SSL to encrypt the entire communication
channel.
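One common sound-steganography technique for hiding key material is
least-significant-bit (LSB) embedding, which overwrites the lowest bit of each
audio sample with one bit of the secret. The Python sketch below is
illustrative only and is not claimed to be SCSM’s actual KM scheme.

```python
# Sketch of LSB (least-significant-bit) steganography, one common way of
# hiding a key inside audio samples; SCSM's actual scheme may differ.

def embed(samples, secret: bytes):
    # Flatten the secret into bits, LSB-first within each byte.
    bits = [(byte >> i) & 1 for byte in secret for i in range(8)]
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite the sample's lowest bit
    return out

def extract(samples, n_bytes: int) -> bytes:
    # Reassemble bytes from the low bits of consecutive samples.
    return bytes(
        sum((samples[b * 8 + i] & 1) << i for i in range(8))
        for b in range(n_bytes)
    )

cover = list(range(200))       # stand-in for 8/16-bit PCM audio samples
stego = embed(cover, b"key")
print(extract(stego, 3))  # b'key'
```

Because only the lowest bit of each sample changes, the carrier audio remains
perceptually unchanged while the key stays recoverable by anyone who knows the
embedding scheme; in practice the embedded key would itself be encrypted.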
Figure 2.15: Research Road Map
Besides data security concerns, clients also need assurance that CSPs will
provide the required level of security for their data and will always perform
as expected. SCSM involves an effective SLA based on core security and privacy
elements. This SLA is designed to assist organizations dealing with highly
sensitive data to adopt trusted cloud storage services without confidentiality
and integrity concerns. The entire process of constructing SCSM, its
uniqueness, and the description and selection of its components, including the
algorithms, processes and techniques used to preserve confidentiality and
integrity for cloud storage services, are described in greater detail in
Chapter 4. The roadmap of this research, which depicts the limitations of the
related work and the contribution of this research, is shown in Figure 2.15.
2.12 Summary
Cloud computing consists of private, public, hybrid, and community deployment
models. Cloud storage service is mainly offered to business organizations via
the public cloud, which is managed and controlled by a third-party CSP.
Although cloud storages are cost-effective, they incur data confidentiality and
integrity issues. Existing research focuses on overcoming the concerns
identified in cloud storage services because of their attractive advantages for
business organizations. The significant contributions proposed by Amazon,
Google, Seiger et al. (2011), Nirmala et al. (2013), Nepal et al. (2011),
Puttaswamy et al. (2011), and Varalakshmi and Deventhiran (2012) were reviewed
and critically analyzed. Considering the identified limitations of the related
work, and in order to address the research problem, this research formulated
the anatomy of SCSM from the lessons learned during multiple sessions of the
literature review. The core design and development of SCSM are systematically
described in Chapters 4 and 5 with in-depth details and analysis.
CHAPTER 3
RESEARCH METHODOLOGY
3.1 Introduction
Cloud computing requires novel SE approaches to deliver agile, flexible,
scalable, and secure software solutions with full technical and business gains
(Raj et al., 2013). Software security tools and techniques have emerged to
build secure applications, but owing to a lack of understanding of
security-related vulnerabilities, developers have not been successful in
applying appropriate SE principles or methodologies when developing secure
software systems such as cloud storage services. It is generally not considered
good practice to apply software security techniques after a system has been
developed. To develop secure cloud solutions, software developers should adopt
methodologies that recommend applying security approaches throughout the entire
development cycle as a built-in procedure (Adebiyi et al., 2012). In this
research, we adopted an SE research methodology in which the focus was to
analyze the security requirements of cloud storage models at each phase of the
methodology, and to apply the required security controls, techniques or
approaches to develop SCSM. The methodology of this research consists of five
major phases, namely Literature Review, Analysis, Design, Implementation, and
Evaluation, as shown in Figure 3.1. Documentation is an activity carried out in
parallel with each phase of the methodology.
Figure 3.1: Research Methodology
The first two phases of the methodology focused on reviewing existing research
contributions and analyzing their loopholes. At the design phase, the system
architecture was designed to analyze the security required for developing SCSM,
a use-case diagram was designed to demonstrate the access control privileges of
the system users, and a sequence diagram was designed to illustrate the overall
security approach of SCSM in performing operations such as the data encryption
and decryption process, data auditing services, the data recovery process, and
data updating, downloading and uploading tasks. At the development phase, the
target was to construct the system using secure programming techniques,
considering the security requirements for preserving data confidentiality and
integrity at the cloud storage. To achieve this goal, we implemented CRSCG,
RBAC using the GlassFish server, data encryption using RSA, data integrity
verification using SHA-1 with DSA, a KM approach using sound steganography, and
secured the overall data transmission by configuring 256-bit SSL. It was also
assumed that SCSM will be in compliance with the designed SLA. As in the design
and development phases, the security of SCSM was also analyzed at the
evaluation phase. The security of the system was tested using various methods,
and the results proved that this research solved the stated problem and
achieved its goal of designing and developing SCSM by following an effective SE
research methodology. In the remainder of this chapter, Section 3.2 describes
each phase of the methodology in greater detail, Section 3.3 describes and
illustrates the research activities and outcomes, and Section 3.4 presents the
summary of this chapter.
3.2 Research Methodology
The phases of the research methodology are described in detail in the
following sub-sections.
3.2.1 Literature Review
In this research, we conducted an in-depth SE-based Systematic Literature
Review (SLR), focused on identifying, evaluating and interpreting all available
research relevant to the research questions and topic area of interest. SLR
synthesis has been widely applied in the medicine and healthcare fields, where
it has proved valuable for enabling researchers to summarize complex scenarios,
identify gaps and overcome harmful interventions; by performing SLR,
researchers obtain clear reporting and evidence for formulating future plans in
the healthcare domain. The successful use of SLR in different fields adequately
demonstrates that it is an effective and efficient approach to surveying
specific topics, and a critical exercise for researchers seeking a deep
understanding of a research area. Owing to the complexity of SE solutions, SLR
has become an important research methodology in the SE field since 2004 for
producing valuable contributions (Zlatko et al., 2012).
In this research, the objective of conducting the SLR was to summarize the
existing evidence concerning cloud storage models, analyze the empirical
evidence of their benefits, and identify their limitations or vulnerabilities
in order to suggest areas for further investigation. Conducting the SLR enabled
this research to produce a clear framework in which to appropriately position
new research activities. The SLR was conducted in multiple sessions, and this
process continued until the end of the research. The initial phase of the SLR
started with a preliminary investigation during the first two semesters of
study, where the focus was on background study in the field of cloud computing
to understand its significant concepts, the security challenges of adopting
cloud technology, and the data protection mechanisms used for securing cloud
storage. This investigation enabled us to select an emerging research topic
based on the identified problem area. The second phase reviewed the existing
industry-implemented cloud storage models such as S3 and GCS, analyzing their
benefits as well as limitations. The third phase reviewed the existing work of
various authors from academia who provided solutions for overcoming cloud
storage security issues; we focused on the five most relevant and significant
contributions related to data confidentiality and integrity concerns, provided
by Seiger et al. (2011), Nirmala et al. (2013), Nepal et al. (2011),
Varalakshmi and Deventhiran (2012), and Puttaswamy et al. (2011). The process
of refining and analyzing the related work was an iterative task. Finally, the
literature was also reviewed to determine effective methods for evaluating the
developed contribution of this research.
3.2.2 Analysis
The existing solutions proposed by the authors discussed in the literature
review were quite useful and significant in developing SCSM. This research
critically analyzed the related work in the field of cloud storage security
contributed by industry as well as academia, and the outcomes showed major
limitations in these solutions that require further improvement and
enhancement. For example, the analysis of S3 and GCS identified that these
services have several advantages but also limitations that must be addressed
to overcome potential data confidentiality and integrity concerns: both have
significant vulnerabilities in terms of the KM approach, cryptographic support,
SLA, and data auditing services. Owing to these shortcomings, these cloud
storages are not trusted by business organizations to store mission-critical
data such as healthcare, banking or official government records. Researchers
from academia have also designed secure storage models, but these contributions
likewise have loopholes in their data confidentiality and integrity preserving
approaches.
The cloud storage model proposed by Seiger et al. (2011) depends on an
unreliable proxy server that is maintained, operated and owned by the client’s
organization, and raises SLA manageability issues with multiple CSPs. The
solutions proposed by Puttaswamy et al. (2011) and Nepal et al. (2011) are
based on insecure KM approaches, since the cryptographic keys are not protected
during transmission. To ensure the correctness of clients’ data, Nirmala et al.
(2013) enabled data owners to verify their data integrity, but this approach is
vulnerable, as mentioned in the list of cloud security recommendations issued
by CSA (2011), and it suggests no mechanism for recovering violated data to its
original state. Varalakshmi and Deventhiran (2012) addressed this shortcoming
by involving a broker for the data auditing service; however, their model has
no mechanism to protect against malicious activities of the broker. The
identified limitations also showed that the existing contributions are unable
to provide trusted cloud storage services that preserve data confidentiality
and integrity. This research learned lessons from the strengths and valuable
contributions of the related research by filtering out their vulnerabilities,
whereas the remaining gaps were filled by the major contributions of this
research.
3.2.3 Design
Software design is an activity of the SDLC in which software requirements are
analyzed to produce a description of the software’s internal structure that
will serve as the basis for its construction. Software design plays an
important role in software development: during this phase, software engineers
produce various models that form a blueprint of the solution to be
implemented. A completed design phase enables programmers to implement the
software components easily, as the designed models make the requirements
clearly understandable. Software design also plays a crucial role in analyzing
and ensuring the security of the system. Design for security is concerned with
preventing unauthorized disclosure, creation, change, deletion, or denial of
access to information and other resources, and with tolerating security-related
attacks or violations by limiting damage, continuing service, speeding repair,
and failing and recovering securely. Access control is a fundamental concept of
security, and one should also ensure the proper use of cryptology (Bourque and
Fairley, 2014).
This phase of the research methodology was based on our core findings. In this
phase, the components of SCSM were designed with utmost care to meet the
system’s security requirements. The main contribution and all associated
components were described in depth and designed using various methods to
clarify and properly justify their use and their advantages over existing
models. SCSM was also designed using use-case, sequence and architecture
diagrams to illustrate its requirements and to analyze its security mechanism.
The process of designing and description not only helps readers to understand
the contribution of this research, but also assisted us in implementing the
developed system. This phase clarified the major contributions of the research
and their goal of preserving data confidentiality and integrity for cloud
storages with trusted services. One of the significant tasks was to formulate
an effective SLA that will enable organizations to trust cloud storage
offerings and will oblige the CSPs to follow the required service constraints
owing to legal implications. The SLA was formulated and its key elements were
presented in tabular form for clarity. The work done at the design phase was
very critical, since each outcome at this stage was input for the
implementation phase. However, the process from design to evaluation is
iterative, so any contribution could be modified if the evaluation results were
not according to expectations.
3.2.4 Implementation
Software implementation refers to the development and testing of the proposed
system. Sometimes a preliminary solution such as a prototype is developed first
to test the proposed solution under certain conditions. The construction
activity encompasses a set of coding and testing tasks that lead to operational
software ready for delivery to the customer or end user. At this stage,
software developers take the software design as input and select appropriate
tools, techniques, programming languages and supporting frameworks to code and
execute the project (Bourque and Fairley, 2014). Software developers work
closely with the software designers and requirements engineers to ensure the
correctness of the implemented components by comparing them with the system or
user requirements. In modern SE, implementation can be the direct creation of
programming-language source code such as Java, the automatic generation of
source code from an intermediate design-like representation of the component to
be built, or the automatic generation of executable code using a
fourth-generation programming language such as Visual C++ (Pressman, 2010). In
this research, each component of SCSM was developed by carefully analyzing the
software design, but without auto-generating source code from the design.
This phase of the research was quite challenging, as it required working with
complex coding techniques. The development of the SCSM prototype involved
various programming languages and frameworks, including Java, Extensible
Markup Language (XML), JavaServer Faces (JSF), Extensible Hypertext Markup
Language (XHTML), JavaServer Pages (JSP) and servlets, with the GlassFish
server. The design of SCSM was very helpful for the implementation, as it
depicts all the steps systematically. Initially, the system was developed on a
local platform, considering the requirements of the cloud hosting environment.
After successful development, it was deployed on a cloud computing platform,
hosted on a Linux-based Virtual Private Server (VPS) running Community
Enterprise Operating System (CentOS). The deployment process was challenging at
first, but with the support of the CSP’s technical staff it became possible to
overcome the major deployment issues, such as setting up the servers and
installing the 256-bit SSL.
3.2.5 Evaluation
An SE system that manages sensitive information is a target for improper or
illegal penetration. In the information security domain, penetration spans a
broad range of activities: hackers attempt to penetrate systems for amusement,
disgruntled employees penetrate systems for revenge, and dishonest individuals
penetrate systems for illicit personal gain. In the SE field, system testing is
concerned with evaluating non-functional system requirements such as security,
speed, accuracy, and reliability. In particular, security testing verifies the
confidentiality, integrity, and availability of the system and its data
(Bourque and Fairley, 2014). During security testing, a tester plays the role
of an individual who desires to penetrate the system. System security testing
ensures that the software protects data and maintains security specifications
(Pressman, 2010). Since this research is concerned with the development of a
secure cloud storage model, we conducted system security testing focused on
verifying that the software is protected from potential malicious internal as
well as external attacks. The designed SCSM is based on five major components
that work together as a system, each developed to accomplish certain
objectives. The evaluation strategy of this research used several testing
methods, such as mathematical evaluation, web-scanning tools, security
scanners, surveys, and system security analysis, to ensure that each component
and the aggregate system achieve their intended objectives.
The implemented partial homomorphic RSA cryptography was evaluated using the
extended Euclidean algorithm. The SLA was evaluated using a survey involving
information security analysts, data auditors, cloud computing researchers,
developers, architects and security specialists from various organizations,
including well-known CSPs such as Amazon, IBM and HP. The implemented 256-bit
SSL was evaluated using the Qualys web-based evaluation methodology. The
proposed KM technique was evaluated against the security recommendations
provided by GFIS. The data auditing process was evaluated using system security
analysis. The multi-factor authentication and authorization process, together
with the final SCSM, were also evaluated through the same survey used to
evaluate the SLA. Finally, the developed SCSM prototype was evaluated using the
Google skipfish security scanner. The evaluation results discussed in Chapter 6
proved that SCSM is successful in preserving data confidentiality and integrity
at remotely located cloud storages.
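The extended Euclidean algorithm computes gcd(e, φ(n)) together with the
modular inverse of e, so it can confirm that an RSA key pair satisfies
e·d ≡ 1 (mod φ(n)). A minimal Python sketch of such a check, using a textbook
key pair rather than the keys actually evaluated:

```python
# Sketch of the extended Euclidean algorithm, which can be used to check the
# RSA key relationship: it yields gcd(e, phi) and the inverse d of e mod phi.

def egcd(a, b):
    # Returns (g, x, y) with a*x + b*y == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(e, phi):
    g, x, _ = egcd(e, phi)
    if g != 1:
        raise ValueError("e and phi are not coprime; no inverse exists")
    return x % phi

# Verifying the textbook key pair (e, d) = (17, 2753) for phi = 3120:
phi, e = 3120, 17
d = modinv(e, phi)
print(d, (e * d) % phi)  # 2753 1
```

A result of 1 for (e·d) mod φ(n) confirms that decryption inverts encryption
for the key pair under test.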
3.3 Research Activities and Outcomes
This section gives a holistic view of the completed research activities. To
answer the research questions, the research objectives were achieved by
systematically following the methodology over the three-year duration of the
research. The first objective was achieved in Phase I; the second objective was
achieved in the first session of Phase II and the third in the second session
of Phase II; the remaining objectives were achieved in Phase III, where each
phase comprises one year of the research. However, the process of research
refinement continued until the final stage, where the focus was on updating the
research with the latest findings. The overall information on the research
activities is presented in Table 3.1.
Table 3.1: Research Activities and Outcomes
Phase P-I
Research Question: What are the existing security models that have been
designed, developed or proposed by the industry and academia researchers to
overcome data confidentiality and integrity concerns for using cloud storage
services?
Research Objective: To investigate and obtain in-depth understanding of
existing security models that have been proposed by the industry and academia
researchers to overcome data confidentiality and integrity concerns for using
cloud storage services.
Methodology: Literature Review
Deliverables: Identification of the research problem; in-depth analysis of
Amazon S3, GCS, and five significant related work contributions.

Phase P-II
Research Question: What are the limitations of existing industry and academia
implemented cloud storage models that raise confidentiality and integrity
issues which prevent organizations dealing with sensitive data from adopting
cloud storage services?
Research Objective: To critically analyze as well as explain the limitations or
gaps which have been identified in the existing industry and academia
implemented secure cloud storage models.
Methodology: Analysis
Deliverables: Related work limitations and road map of the research.

Phase P-II
Research Question: How to design a model that preserves data confidentiality
and integrity at cloud storages as well as ensures the delivery of trusted
services to the clients?
Research Objective: To design an improved and enhanced secure cloud storage
model which preserves data confidentiality and integrity, as well as ensures
the delivery of trusted services to the clients by considering their data
security policies.
Methodology: Design
Deliverables: Detailed description, formulation and design of SCSM with its
advantages and applications.

Phase P-III
Research Question: How to develop a model that enables the clients to store and
process their data at cloud storages with consistent data integrity,
confidentiality and trust?
Research Objective: To implement and deploy a web-based prototype on a cloud
computing infrastructure which facilitates the clients to store and process
their data at cloud storages with consistent data confidentiality, integrity
and trust assurance.
Methodology: Implementation
Deliverables: Implementation of SCSM as a web-based prototype and its
deployment on a cloud computing infrastructure.

Phase P-III
Research Question: How to verify that the implemented cloud storage model is
successful in preserving the confidentiality and integrity of sensitive data,
and ensuring the delivery of trusted services to the clients?
Research Objective: To evaluate the developed cloud storage model in order to
ensure that it overcomes or mitigates the data confidentiality and integrity
concerns, and gains the trust of organizations dealing with sensitive data to
adopt cloud storage services.
Methodology: Evaluation
Deliverables: Evaluation results of SCSM and each of its components.
3.4 Summary
This research study is based on an SE research methodology that focuses on
identifying the research problem, analyzing the existing contributions,
developing an innovative solution, and evaluating it. The phases of the
methodology helped the research to systematically accomplish the research
objectives, which in turn led to the achievement of the research aim. The
research methodology clarified the critical approaches, tools and techniques
used to complete the significant tasks of literature review, analysis, design,
implementation and evaluation. To further clarify the aggregated research
planning, this research presented a table of activities and outcomes that
clearly states the timely completion of the research questions with their
associated objectives, the applied phase of the methodology, and the
deliverables. The adopted research methodology was iterative in order to refine
each phase, and documentation proceeded in parallel with the completion of each
milestone.
CHAPTER 4
SECURE CLOUD STORAGE MODEL
4.1 Introduction
In order to contribute to the field of cloud security, this research provides
an improved and enhanced model, named SCSM, to overcome confidentiality and
integrity concerns in using cloud storage services and to ensure the delivery
of trusted services to clients in accordance with their organizations’ data
security and privacy policies. SCSM is constructed from existing security
methods and techniques, including cryptographic algorithms, ACMs, and SSL.
However, the aim of this research was not to contribute to the field of
information security or cryptography; it was entirely focused on developing a
new model for cloud storage security, specific to data confidentiality and
integrity, in order to achieve clients’ trust. The existing security and
cryptographic methods are used in SCSM to formulate a new approach to the
research problem.
The remainder of this chapter is organized into five sections. Section 4.2
describes the building blocks of SCSM. The description and architecture of
SCSM are provided in Section 4.3. Section 4.4 describes each component of SCSM
and its construction mechanism with in-depth analysis. Section 4.5 describes
the aggregated execution process of SCSM. Section 4.6 presents the summary of
this chapter.
4.2 Building Blocks of SCSM
In order to overcome the issues of confidentiality and integrity in using cloud storage services, and to provide trusted services to clients, SCSM is focused on achieving the following set of requirements, which are considered its building blocks.
i. Only authorized users can access the system and perform their privileged
tasks under a strict Access Control Policy (ACP) defined by the clients’
organization as mentioned in the SLA, described in Section 4.4.4.
ii. Clients are granted adequate control over their data, such as handling encryption, decryption and Verification Metadata (VMD) generation tasks.
iii. Clients can store, process and perform a certain number of transactions over their data without decrypting it.
iv. Clients can consistently monitor their files on cloud without revealing
data to an unprivileged authority.
v. Clients can efficiently restore unintentionally written, modified or
violated records.
vi. Data transmission and communication channel among all the involved
end-users must be secure and encrypted.
vii. Storage services provided to the clients must be in accordance with their data regulatory compliance, also considering other requirements such as the availability, reliability and accessibility levels specified in the SLA.
viii. Clients and the CSP must be equally protected from potential malicious
activities of each other.
4.3 Description and Architecture of SCSM
The SCSM architecture involves three end-users: the Client’s Admin (CA), the Trusted Third Party’s Admin (TTPA), and the Cloud Service Provider’s Admin (CSPA). This research assumed that the TTP is a third-party auditing organization from the government sector, which can provide unbiased auditing and KM services for data owners and CSPs. TTPA is a trusted and certified professional employee of the TTP, who has the expertise and capability to conduct data storage auditing services. From the client’s perspective, this research assumed that CSPA is untrusted, TTPA is semi-trusted, and the client’s own employee, CA, is fully trusted. This is based on the assumption that the TTP’s infrastructure might be hijacked by an attacker or that TTPA might be disgruntled.
Figure 4.1: Architecture of SCSM
The cloud server is the central component of SCSM. It analyses the access control policies, which are defined by the data owner and enforced by the CSP, and performs the corresponding operations requested by privileged users under the restriction of RBAC with CRSCG, as described in Section 4.4.1. When CA uploads files to the cloud storage, the contents are automatically encrypted on the fly using RSA partial homomorphic cryptography. Data always remain encrypted at the cloud storage until accessed by the user for download or decryption for live changes. During the decryption operation, in order to maintain confidentiality, data are not decrypted at the CSP’s site and then sent to the client; instead, they are decrypted on the stream after departing from the cloud storage and prior to arrival at CA’s machine, as described in Section 4.4.2.
The VMD, i.e. the digital signature, is generated by CA for each file and is stored initially at CA’s machine for immediate integrity verification. After this process and the accomplishment of certain tasks such as data processing, CA sends the VMD and cryptographic keys to TTPA for secure storage and auditing services, as described in Section 4.4.5. TTPA performs the requested auditing services and shares the auditing reports among all users. TTPA also reports to CSPA for data recovery if data are violated, or informs CA if data integrity is intact, as described in Section 4.4.5. On viewing the audit reports, CSPA recovers the violated data; in certain cases, if the data are not recoverable, penalties may be imposed on the CSP in accordance with the SLA, as described in Section 4.4.4.
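The generate-then-verify flow for VMD can be sketched as follows. This is a simplified stand-in that uses SHA-256 file digests in place of the DSA signatures SCSM employs, and the function names are illustrative assumptions rather than the thesis implementation.

```python
import hashlib

def generate_vmd(file_bytes: bytes) -> str:
    """CA-side: produce verification metadata (here, a SHA-256 digest) for a file."""
    return hashlib.sha256(file_bytes).hexdigest()

def verify_integrity(file_bytes: bytes, vmd: str) -> bool:
    """Auditor-side: recompute the digest and compare it with the stored VMD."""
    return generate_vmd(file_bytes) == vmd

# CA generates the VMD before upload and later hands it to TTPA for auditing.
original = b"employee records"
vmd = generate_vmd(original)
intact = verify_integrity(original, vmd)                    # data unchanged
violated = not verify_integrity(b"tampered records", vmd)   # violation detected
```

In SCSM the VMD is additionally signed, so that TTPA can audit integrity without ever seeing the plaintext data.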
SCSM also addresses the problem of malicious clients who impose fake penalties on the CSP, for example by claiming data loss without ever sending the data to the cloud storage, or by claiming integrity violation after intentionally modifying the data. SCSM handles such cases by maintaining a user access log file at the cloud server, which is assumed to be highly secure against modification by a malicious user such as CA or CSPA. All communications and data sent or received are secured with an additional layer of 256-bit SSL to overcome possible MITM attacks, as described in Section 4.4.3.
4.3.1 Roles and Responsibilities
Each involved role is expected to contribute efficiently by performing its privileged tasks in order to maintain data confidentiality and integrity.
CA is responsible for the following tasks:
i. Generating cryptographic keys and encrypting data prior to its arrival at
the cloud storage.
ii. Decrypting, updating and downloading data throughout the entire computing life cycle.
iii. Generating VMD for selected files stored on cloud and sharing it with the
TTPA.
iv. Safe delivery and retrieval of significant parameters such as data hashes,
private and public keys.
v. Ensuring with the assistance of TTPA that data on the cloud is intact.
TTPA is responsible for the following tasks:
i. Conducting auditing services on behalf of the clients.
ii. Initiating the requested auditing process as directed by the CA.
iii. Providing response to the CA about the status of data, whether it is
tampered, deleted, manipulated or intact.
iv. Requesting CSPA to start the data recovery process from the backup zone
cloud storage.
v. Storing clients’ cryptographic keys securely with redundant backups.
CSPA is responsible for the following tasks:
i. Successful recovery of complete files or certain records that have been
violated by the malicious users.
ii. Reporting to CA and TTPA on incidents affecting clients’ data, for example by sending data recovery reports as email alerts.
iii. Monitoring and managing the security of the cloud computing infrastructure, for example the security of the network, virtualization, application, physical and middleware layers. However, this task is a research assumption; it is not covered by the scope of the SCSM implementation.
4.4 Components of SCSM
In order to preserve data confidentiality, integrity and to ensure the delivery of
trusted cloud storage services, SCSM is designed and developed by involving five
components which include multi-factor authentication and authorization process,
partial homomorphic cryptography, implementation of 256-bit SSL, SLA, and TTP
services which involve KM approach and data auditing process, as shown in Figure
4.2. Design, description, working mechanism, and contributions of each component
in formulating the SCSM are described in the following sub-sections.
Figure 4.2: Components of SCSM
4.4.1 Multi-factor Authentication and Authorization Process
This research considers identifying the users who access the system and controlling their access privileges to be the initial and most significant process in preserving data confidentiality and integrity and ensuring the delivery of trusted cloud storage services. SCSM incorporates a multi-factor authentication and authorization process in which users are first authenticated by validating their username and password; secondly, users’ access is controlled using RBAC with CRSCG. This
research enhanced the security of traditional RBAC with the newly developed CRSCG algorithm to strengthen system access.
Using SCSM, once a user, for example Bob, successfully logs into the system, a controlled environment is created that allows him to perform only those operations granted to his role. Besides the ACM, SCSM also uses an additional layer to secure access to privileged tasks: even if Bob wants to perform a task that the defined ACP allows him, he is still required to request a secret code before performing tasks such as decryption, download, auditing, encryption or metadata generation. A complex 12-character random secret code is generated by CRSCG and delivered to Bob for performing privileged tasks. The construction details of RBAC and CRSCG for SCSM are described in the following sub-sections.
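The two-step flow above, password authentication followed by a per-task secret code, can be sketched as below. The user store, code delivery, and all function names here are illustrative assumptions, not SCSM’s actual implementation.

```python
import hmac
import secrets
from hashlib import pbkdf2_hmac

# Illustrative user store: salted, iterated password hashes rather than plaintext.
USERS = {"bob": {"salt": b"demo-salt", "pw_hash": b"", "role": "CA"}}
USERS["bob"]["pw_hash"] = pbkdf2_hmac("sha256", b"correct horse",
                                      USERS["bob"]["salt"], 100_000)

PENDING_CODES = {}  # one live secret code per user; cleared on use or logout

def authenticate(user: str, password: str) -> bool:
    """Step 1: validate username and password (constant-time comparison)."""
    rec = USERS.get(user)
    if rec is None:
        return False
    candidate = pbkdf2_hmac("sha256", password.encode(), rec["salt"], 100_000)
    return hmac.compare_digest(candidate, rec["pw_hash"])

def issue_task_code(user: str) -> str:
    """Step 2a: stand-in for CRSCG, issuing a fresh random code for a task."""
    code = secrets.token_urlsafe(9)[:12]
    PENDING_CODES[user] = code      # in SCSM the server e-mails this to the user
    return code

def authorize_task(user: str, supplied_code: str) -> bool:
    """Step 2b: the privileged task proceeds only if the code matches; single-use."""
    expected = PENDING_CODES.pop(user, None)
    return expected is not None and hmac.compare_digest(expected, supplied_code)
```

A privileged operation such as decryption would thus require both a valid session and a freshly issued code.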
4.4.1.1 Role-Based Access Control
The access privileges of users are controlled to provide separation of duties.
Depending on the requirements and suitability of the situation, in a cloud computing
environment several ACMs can be implemented such as MAC, DAC, and RBAC.
However, RBAC is suitable in cloud computing scenarios where users are clearly separable according to their job responsibilities (Wei et al., 2012). Hence, this research implemented RBAC to isolate the access privileges of CA, CSPA and TTPA. According to the security recommendations of the Cloud Security Alliance (CSA), the data owner is responsible for enforcing access control policies whereas the CSP is responsible for their implementation (CSA, 2011). This research assumed that the data owner, together with its IT professionals and legal authorities, will specify and enforce the access control policies for each role by signing an SLA with the CSP in the presence of the TTP, as shown in Figure 4.3. Due to the implementation of RBAC, each role is allocated specific tasks. This mechanism restricts the illegal interference of
an involved end-user or an external adversary. For example, operations such as decryption, download, and encryption of the client’s data are associated only with CA, whereas the data recovery process is associated with CSPA, and conducting the data auditing services is associated with TTPA. However, the task of viewing audit reports is associated with all three roles. Users cannot exceed their specified privileges due to the implementation of RBAC with CRSCG.
Figure 4.3: RBAC Privileges
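The privilege separation described above can be expressed as a role-to-permission mapping with a deny-by-default check. The permission names below are illustrative, following the operations listed in the text.

```python
# Illustrative RBAC policy following the privileges described above.
ROLE_PERMISSIONS = {
    "CA":   {"encrypt", "decrypt", "download", "generate_vmd", "view_audit_report"},
    "CSPA": {"recover_data", "view_audit_report"},
    "TTPA": {"conduct_audit", "store_keys", "view_audit_report"},
}

def is_permitted(role: str, operation: str) -> bool:
    """Deny by default: an operation is allowed only if the role grants it."""
    return operation in ROLE_PERMISSIONS.get(role, set())
```

Data operations are thus reachable only from the CA role, recovery only from CSPA, auditing only from TTPA, while viewing audit reports is shared by all three.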
4.4.1.2 Complex Random Security Code Generator
CRSCG is the key component of SCSM. It is based on an algorithm that constructs a 12-character complex random secret code, such as o7@Uh1~Ew8$O, from sets of numbers, special characters, and upper- and lower-case alphabets, whenever a privileged user requests to perform specific operations. The generated code is returned to the corresponding user via email from the cloud server. The pseudo-code of the CRSCG algorithm is defined as follows:
Let upperAlpha be the set of all upper-case alphabets | upperAlpha = {A…Z}
Let lowerAlpha be the set of all lower-case alphabets | lowerAlpha = {a…z}
Let symbol be the set of special characters | symbol = {~,!,@,$,%,^,&,*,(,),?}
Assume R is the random factor
Integer variables C = 6, C2 = C / 2 and C3 = 0 - C2
String secretCode = ""
For (Integer X = C3; X < 0; X++)
Begin:
Integer indexNumber = R.generateRandomNumber [1…26]
String lowercase = lowerAlpha [indexNumber]
Integer number = R.generateRandomNumber [0…9]
String numberString = number.convertToString
Integer indexNumberTwo = R.generateRandomNumber [1…11]
String specialCharacter = symbol [indexNumberTwo]
Integer indexNumberThree = R.generateRandomNumber [1…26]
String uppercase = upperAlpha [indexNumberThree]
secretCode = secretCode + lowercase + numberString +
specialCharacter + uppercase
End
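Assuming the random factor is a cryptographically secure generator, the pseudo-code above maps naturally onto Python’s `secrets` module. The following sketch is illustrative only; the function name mirrors the pseudo-code rather than the thesis implementation.

```python
import secrets
import string

SYMBOLS = "~!@$%^&*()?"  # the special-character set from the pseudo-code

def generate_secret_code(blocks: int = 3) -> str:
    """Build a 12-character code from three 4-character groups of
    lowercase letter + digit + symbol + uppercase letter."""
    rng = secrets.SystemRandom()  # cryptographically strong random factor
    code = ""
    for _ in range(blocks):
        code += rng.choice(string.ascii_lowercase)
        code += rng.choice(string.digits)
        code += rng.choice(SYMBOLS)
        code += rng.choice(string.ascii_uppercase)
    return code
```

Each call yields an unpredictable code of the form shown earlier, such as o7@Uh1~Ew8$O; in SCSM the server would then deliver this code to the requesting user.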
As represented in the pseudo-code, due to the use of the random factor, an unpredictable secret code is constructed each time and sent to the corresponding user. The complexity of the code makes it practically unguessable by an adversary or a code-cracking tool, as demonstrated by the experimental results in Section 6.5.5 of Chapter 6. It is assumed that the generated code will be provided to the requesting user via an isolated channel; for example, a user may obtain the code using a token generator device, a smartphone application or a Short Message Service (SMS), depending on the facility available to the user. Considering the research scope, time constraints, and implementation complexity of these delivery channels, this research adopted delivery of the secret code to the user via email for the sake of the prototypical implementation. The generated code is also transmitted via an encrypted SSL channel to protect it from session-hijacking MITM attacks. The receiving user can perform the desired privileged operations if the requested code is entered correctly. The life cycle of a secret code starts from its generation and ends when the user requests a fresh code or the session expires, i.e. when the user logs out. It is strongly recommended that in real-life practical computing scenarios, when SCSM is developed and deployed at the industry level, the secret code be delivered using an isolated secure channel.
4.4.2 Partial Homomorphic Cryptography
Cryptographic operations such as encryption, decryption, digital signatures,
and hashing are extremely significant to protect data confidentiality as well as
integrity and to overcome non-repudiation issues, especially when data are stored or
shared with an untrusted CSP outside the premises of the data owner’s enterprise.
There are various cryptographic algorithms to protect data; the choice among them may depend on efficiency or security requirements. For example, symmetric algorithms are more efficient than asymmetric algorithms, whereas in terms of data security, asymmetric algorithms are widely accepted (Omar et al., 2012; Ayushi, 2010). Since the focus of this research is to protect confidential data at the cloud
storages, this research implemented asymmetric algorithms for cryptographic
operations such as encryption, decryption, and data integrity verification. This
section covers the process of key generation, encryption and decryption techniques,
whereas the data integrity verification process is described in Section 4.4.5. The RSA algorithm is used for encryption and decryption in this research due to its resilient security and wide acceptance (Kalpana and Sudha, 2012; Somani et al., 2010; Milanov, 2009). However, this research implemented the partial homomorphic version of the RSA algorithm. One of the limitations of cloud systems is enabling users to process their data on the cloud while it remains encrypted (Hibo et al., 2011): normally, users must decrypt their data each time before processing, whereas using SCSM users can perform a certain number of operations directly on the encrypted data.
Considering the SCSM computing cycle, in order to encrypt the organization’s data files, after the successful authentication and authorization process described in Section 4.4.1, CA will generate an RSA-based homomorphic private and public key pair. Since the public key of CA may also be shared with other users for performing encryption, the protection of the private key is a vital task, because it is used for decrypting all the files encrypted with its paired public key. In order to secure the key generation process, this research used a randomized key generation method, where a random factor is used to generate two large prime numbers (p, q) for the creation of the private and public keys. This process is mainly implemented to avoid the creation of identical keys for other users of SCSM, such as CSPA or TTPA. If the private key is compromised, clients’ data can be stolen or confidentiality can be breached by an adversary. The private key must therefore be protected at every stage, not only during generation: it must be secure at storage, use, transfer and retrieval, as described by the KM approach in Section 4.4.5. The key generation, encryption and decryption processes of SCSM are based on the RSA partial homomorphic algorithm (Rivest et al., 1978) with the addition of a randomized key generator. These processes are described as follows:
Step-1: Generate two large random prime numbers p and q
Step-2: Compute n = p*q
-- n is used as the modulus for both the public and private keys.
Step-3: Compute φ(n) = (p-1)(q-1)
-- φ(n) is Euler's totient function.
Step-4: Choose e such that 1 < e < φ(n) and gcd(e, φ(n)) = 1
-- e and φ(n) are co-prime; e is kept as the public key exponent.
Step-5: Find d = e^-1 mod φ(n)
-- d is kept as the private key exponent.
Step-6: Encryption process: compute c = m^e mod n
Step-7: Decryption process: compute m = c^d mod n
Step-8: Terminate.
After the generation of the paired keys, CA will use the public key to encrypt data files and the private key to decrypt them. In the RSA algorithm, decryption is the reverse process of encryption, and no one can recover the decrypted data without the actual private key. Using SCSM, unlike other cloud systems, CA is not required to encrypt the data before sending it to the cloud storage; the data are encrypted automatically while being uploaded, before arriving at the cloud storage. Data are encrypted in real time directly from the stream and are fully encrypted on arrival at the cloud storage. Similarly, for the decryption process, data are not decrypted at the CSP’s site and then transferred to the client, because this could raise concerns of confidentiality violation. Data depart from the cloud storage encrypted and are decrypted in real time from the download stream; the contents are downloaded or viewed upon arrival at CA’s machine. Users of SCSM can also perform limited operations on the encrypted data. Assume that CA has stored a file named EMP.txt which contains employee records such as name, address, salary, and commission rate. CA can increase, modify or delete the salary and commission rate of an employee. The two demo queries are implemented as follows:
Query-1: Increase the commission rate of an employee.
inputFactor = 2
encrypt(inputFactor, publicKey)
commission_rate = inputFactor.multiply(commission_rate)
Query-2: Alter and update the salary of an employee to 1350.
inputFactor = 1350
multiplicativeFactor = 1
encrypt(inputFactor, publicKey)
encrypt(multiplicativeFactor, publicKey)
amount = inputFactor.multiply(multiplicativeFactor)
salary = amount
Similarly, various other operations can be performed on encrypted records. It would obviously be a great shift toward the adoption of cloud storage services if users were able to perform unlimited operations on encrypted data. This requirement can be achieved using Fully Homomorphic Encryption (FHE); however, at present, it is not implementable in a cloud computing environment due to the extreme complexity of its operations (Kui et al., 2012; Wang et al., 2013; Stefania et al., 2012). This research considers the use of FHE as future work to be implemented as part of SCSM. Using SCSM, the generated RSA keys are used only for data encryption and decryption tasks, whereas the generated DSA public and private keys are used for data integrity verification tasks, as described in Section 4.4.5.
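The multiplicative homomorphic property that the demo queries rely on can be illustrated with textbook RSA. The tiny parameters and function names below are illustrative only (not secure key sizes, and not the thesis implementation): multiplying two ciphertexts modulo n yields the ciphertext of the product of the plaintexts.

```python
# Illustrative sketch of RSA's multiplicative homomorphism with
# tiny textbook parameters (NOT secure key sizes).
def rsa_keygen(p: int, q: int, e: int = 17):
    n = p * q
    phi = (p - 1) * (q - 1)          # Euler's totient of n
    d = pow(e, -1, phi)              # private exponent: e^-1 mod phi(n)
    return (e, n), (d, n)            # (public key, private key)

def encrypt(m: int, pub) -> int:
    e, n = pub
    return pow(m, e, n)              # c = m^e mod n

def decrypt(c: int, priv) -> int:
    d, n = priv
    return pow(c, d, n)              # m = c^d mod n

pub, priv = rsa_keygen(61, 53)       # n = 3233, the classic small example
salary, factor = 1350, 2
c1, c2 = encrypt(salary, pub), encrypt(factor, pub)
# Multiplying ciphertexts modulo n multiplies the underlying plaintexts:
product = decrypt((c1 * c2) % pub[1], priv)   # 1350 * 2 = 2700
```

This is exactly the operation Query-1 exploits: the cloud multiplies encrypted values without ever seeing the plaintext commission rate.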
4.4.3 256-bit Secure Socket Layer
Despite encrypting the data for protection at the cloud storages from malicious
insider and external threats, the entire communication between CA, CSPA, TTPA
and SCSM server should be secured from MITM attacks. Although using SCSM, in
order to preserve data confidentiality and integrity, users encrypt their data before
sending it to the cloud storage, but other significant parameters of the users such as
usernames, passwords, cryptographic keys, collaboration messages, requested secret
code and VMD must also be protected against the attacks of hackers during the
communication process. Considering certain standards such as PCIDSS or HIPAA,
the use of SSL is mandatory for ensuring data privacy (Ahmed, 2012; Bamiah et al.,
2012). In order to achieve these requirements, SCSM is based on implementation of
a 256-bit SSL 3.0 with the support of Transport Layer Security (TLS) 1.0, which
serves as an encrypted communication channel between the users and SCSM server.
Cloud systems are nowadays secured with 128-bit SSL encryption. However, this
research implemented the 256-bit SSL encryption as suggested by CSA, (2011) to
gain enhanced protection while using cloud services.
The SSL certificate is installed on the SCSM server, which holds the private key securely, whereas the public key is shared with the clients during the initial authentication process. Since SSL technology supports hybrid cryptography, two keys are generated under the proposed SSL: an asymmetric RSA key of length 2048 bits and a symmetric key of length 256 bits. The RSA key is used for encrypting the symmetric key, which in turn encrypts the entire communication channel. For instance, when a client browser initially requests a connection to the SCSM server, the server responds with the SSL certificate containing the public key, and the client’s browser verifies the authenticity of the SSL certificate, its version, validity and provider. The client-side application generates a symmetric key, encrypts it with the server’s public key, and returns it to the server. Upon arrival, the server decrypts the encoded key using its private key, recovering the symmetric key. Once both client and server hold the symmetric key, the entire communication between the client and the SCSM server is encrypted and decrypted securely without being compromised by an adversary. The clients’ data files for storage are also sent and received under the security of SSL. In this case, the file contents are dual-encrypted, since CA also encrypts the data with partial RSA homomorphic encryption. Even after decrypting the SSL traffic containing the clients’ data files, the SCSM server cannot decrypt the actual file contents, as they can be decrypted only with the private key allocated to CA. The use of SSL together with partial homomorphic encryption protects the data from compromise by malicious insider and outsider attacks.
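The client-side of the handshake described above can be sketched with Python’s `ssl` module. Note that current TLS stacks have deprecated SSL 3.0; in modern terms, “256-bit SSL” corresponds to an AES-256 cipher suite negotiated inside the TLS handshake, so this sketch enforces TLS 1.2+ with full certificate and hostname verification. The hostname is illustrative.

```python
import ssl

# Client-side context: certificate verification, hostname checking,
# and a modern minimum protocol version are all enforced.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.check_hostname = True
context.verify_mode = ssl.CERT_REQUIRED

# Wrapping a TCP socket would then look like this (host is illustrative):
# with socket.create_connection(("scsm.example.com", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="scsm.example.com") as tls:
#         tls.sendall(b"...")   # all traffic now travels encrypted
```

The key exchange and symmetric-session setup described in the text happen inside `wrap_socket`; the application only sees an encrypted channel.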
4.4.4 Service Level Agreement
Adopting cloud storage services for storing confidential data without considering an SLA is similar to renting a house without signing a contract. This leaves both parties, i.e. the tenant and the house owner, unprotected and gives each an unrestricted opportunity to act maliciously by not adhering to their responsibilities. Under such circumstances, however, the tenant feels more insecure because the owner has greater rights. For instance, if the tenant turns out to be malicious by breaking household items, the owner can simply ask the tenant to vacate the house or to pay for the breakage. Alternatively, if the owner fails to fulfil his responsibilities, the tenant can take no action besides leaving the house, which may result in the additional cost of migration and the disruption of life.
These circumstances arise when no legal terms and conditions have been signed which identify the roles and responsibilities of both parties as well as the potential penalties if either fails to meet the specified requirements or misuses their privileges. To avoid such situations, a legal contract must be signed involving a third party, such as an agent who understands the country’s laws and government rules. The usage of the house and the provided service will be monitored by the agent as a third party, so if either party does not meet their responsibilities they may face penalties. This real-life scenario clarifies the significance of legally signed terms and conditions prior to the adoption of a service, whether it is renting a house or cloud storage capacity. Although CSPs such as Amazon and Google also provide SLAs for the S3 cloud storage service, these are fixed, non-negotiable and unable to satisfy the security and privacy requirements that organizations dealing with confidential data have for adopting cloud storage services (Asha, 2012).
In this research it is assumed that the CSP dealing with the storage of mission-critical data will provide assurance to clients that their data always remain protected using a well-defined Information Security Management System (ISMS), by implementing strict security controls, procedures and policies. An ISMS is a systematic approach or framework for managing sensitive enterprise data so that it remains secure; it helps small, medium and large businesses in any sector to keep information and assets secure. It is recommended that the CSP implement ISO 27001, which contains a set of guidelines to create, implement, deploy and maintain an ISMS. It consists of 11 groups and 133 security controls, and it is a widely accepted standard for formulating a secure infrastructure. SCSM involves an effective SLA which is based on the core security and privacy SLA elements defined by Kevin and Hanf (2010). This SLA is designed to assist organizations dealing with highly sensitive data to adopt cloud storage services without confidentiality and integrity concerns. This research assumed that the proposed SLA will be signed by the data owner and the CSP in the presence of the TTP. Considering the research scope, the proposed SLA is focused only on the data security and privacy requirements of an organization; other requirements, such as service availability and processing capabilities, are not covered by this SLA. However, this research assumed that these requirements will exist in the final SLA according to the CSP’s service capabilities. Since it is impossible to cover all the details of a complete ISO implementation for a cloud datacentre in this research, the key elements of the proposed SLA, i.e. Service Level Objectives (SLOs) and Service Level Requirements (SLRs), are settled, wherever applicable, in compliance with ISO 27001 guidelines, and it is assumed that in practical scenarios the entire computing infrastructure of the CSP and the provided services will comply with ISO 27001.
The proposed SLA is based on an assumed scenario in which a client named MXO Corp, a PCI, wants to adopt a cloud storage service from a CSP, SNB Corp, to leverage data recovery, resilient availability and ubiquitous accessibility features, involving JAS Corp as a TTP. The key elements of the SLA with the corresponding objectives, settled approach and SCSM implementation methodology are presented in Table 4.1. However, elements which relate to law or regulatory compliance, or which require a real cloud computing environment and resources to be implemented, are beyond the scope of this research; those elements are therefore addressed based on assumptions and are not covered by the SCSM implementation.
Table 4.1: Service Level Agreement
SLA Element Objectives Settled Approach SCSM Implementation
Methodology
SLA Context
To identify the
provider, client,
and the other
involved
stakeholders.
Clarify the SLA
background and
purpose.
Provider: SNB Corp: A well
reputed CSP.
Client: MXO Corp: A semi-
government PCI offering credit
and debit card services to a large
number of consumers.
TTP: JAS Corp: A certified and
trusted computer forensic
organization responsible for
auditing and monitoring cloud
services on behalf of MXO, as
well as responsible for assisting in
the KM approach.
Background: This SLA is a
negotiated agreement between
two parties, i.e. MXO and SNB.
In order to maintain the data
Three major roles will be
created using RBAC. The
privileged users include CA,
CSPA, and TTPA. MXO
enforces the RBAC policies
according to their
requirements and data
standards. The SNB will
implement the access
control policies.
90
security and privacy policies,
MXO must comply with
PCCIDSS when their data are
stored at cloud storage. They have
enforced their data security and
privacy requirements in this SLA.
MXO’s security requirements are
analyzed and agreed by SNB.
Purpose: To reduce the key areas
of conflict before they occur. In
order to ensure fairness in the
overall computing process, this
SLA will also act as a dispute
resolution system which will
clearly state the required actions
and penalties in case of violating
the settled agreements, terms or
conditions either from MXO or
SNB. The SLA must be reviewed
periodically by JAS to ensure the
compliance to stated agreement
policies.
Service
Description
To provide a
clear and logical
linkage of
overall service
capability and
offerings.
Storing: To store MXO’s data,
for example staff and customer
records at cloud storages for
backup, disaster recovery,
enhanced availability and
accessibility features.
Processing: To access, modify
and process data, while it remains
in secure format (encrypted).
Retrieving: To retrieve data in its
actual format (decrypted) and
with maintained integrity
whenever required by the MXO.
In order to securely process
data on the cloud, MXO’s
data will be encrypted using
asymmetric partial
homomorphic cryptography
which enables the CA to
perform a certain amount of
fixed privacy preserved
operations.
Data will be always
available for CA to
download in its actual
format.
91
Security
Management
To include a
description of
approaches that
CSP will
implement to
enhance data
security.
CSP’s Security Controls: SNB
must provide physical security to
avoid tampering of data from
outside or inside attackers.
MXO’s data must be secured and
isolated from other tenants at the
same cloud storage infrastructure.
Privacy Guarantees: Beside
security policies, the SNB must
guarantee privacy preserving
processing capabilities. MXO’s
data must not reveal to an
unauthorized party, not even the
CSP while at the storage or
during the process.
Data must not be visible to JAS
during the auditing process. No
one should be able to decrypt the
data beside CA.
Vendor Auditing Techniques:
SNB must allow external security
auditing by a third party vendor
selected from MXO in order to
monitor the maturity and capability of security controls and standards offered and used by SNB to protect their data on the cloud.
Vulnerability Management: SNB must scan for new vulnerabilities in the system, then analyze and undertake the necessary steps to resolve the identified vulnerabilities.
Data Ownership, Protection and Control: The data owner is MXO, and all data rights belong to them. Data must be encrypted with processing capabilities and must remain located within the local country. Data backups must be stored at several locations inside the country, as permitted by legal laws.
Assumptions: The physical security of the cloud datacentre must be in compliance with ISO 27001, by implementing the guidelines and security controls discussed under the section on physical and environmental security. It is recommended that the organization's entire premises and infrastructure be secured with security guards, biometrics, card readers, intrusion detection systems and CCTV. Physical access should be granted to employees by considering their access privileges, and all technical facilities must be monitored, together with fire detection systems, robust power supplies, and a resilient network design. This research assumed that, in order to provide physical security for the cloud infrastructure, a TPM and firewalls will also be used, and anti-malware will be installed. The privacy of MXO's data will be protected by maintaining data confidentiality and integrity. During the data upload operation, data will be encrypted on the fly, i.e. from the stream in real time; similarly, during the download operation, data will be decrypted from the stream, so no data will be visible to an unauthorized party. Communication between the users will be secured by implementing 256-bit SSL to avoid MITM attacks. This research assumed that SNB will discuss the security policies applied for MXO with the third-party vendor hired by MXO. The third-party vendor will audit those security standards to ensure that they are accurate as per MXO's requirements. They will also analyze the results of the vulnerability assessment and penetration testing performed by SNB, in order to identify and overcome system vulnerabilities. SNB will not have any right over MXO's data; their task will be to store the data in their cloud storage at geographical locations permitted by the country's data protection Act.
Roles and Responsibilities
Objective: To identify the clear delineation of roles and responsibilities of the involved users.
Requirements:
CSPA: An expert user with cloud computing knowledge, hired from SNB after strict interviews and background checks. CSPA will monitor the infrastructure security provided for MXO's data. CSPA is also responsible for managing data backups and assisting TTPA in performing the data recovery process.
CA: A user with intermediate cloud computing knowledge, hired from MXO after strict interviews and background checks. CA will manage all MXO's services, including transferring, processing, accessing and retrieving the organization's sensitive data.
TTPA: An expert and certified professional auditor, hired from TTP after strict interviews and background checks. TTPA is responsible for conducting the requested auditing services on behalf of MXO. TTPA is also responsible for the secure storage and retrieval of CA's cryptographic keys.
Assumptions: All involved users are given privileges according to their roles and the constraints set by MXO. No one can perform any operation beyond their privileges, due to the implementation of RBAC with CRSCG. CSPA will provide the required services by considering the settled SLA. MXO's data will not be revealed to TTPA, and the keys will be stored with JAS in encoded form for security reasons. TTPA will be privileged to use VMD for conducting the audit process. CA will be associated with data operations such as storing, processing, and retrieving. TTPA will perform data auditing and will store CA's cryptographic keys with redundant and secure backups.
Reporting Guidelines and Requirements
Objective: To identify agreements regarding the service performance and the status of reporting.
Requirements:
Service Failure: SNB is fully responsible for service failure or security breaches.
Audit Reports: Auditing reports will be shared with CA and CSPA, apart from TTPA.
Log File: This file will record the overall use of the service and accesses to the data and system. It will be generated at the cloud server and will be kept for analyzing and detecting violations in unwanted cases. Log reports will be secured and can be accessed only by the JAS.
Regulatory Compliance Responsibility: The auditing process must be compliant with PCIDSS.
Assumptions: This research assumed that SNB will maintain backup servers and storages in case of a service outage or failure, so that the service will continue without any delay. Once the auditing process is completed, the report will be shared with CA and CSPA. If there is any violation, TTPA will request CSPA to recover the data immediately. CSPA will maintain log files to detect any unwanted activities carried out by an illegal authority. TTPA will ensure that MXO's data are secured and that regulatory compliance with PCIDSS is maintained.
Terms and Conditions
Objective: To support a clear understanding of business risks for the cloud computing consumer.
Requirements:
Indemnification: SNB must pay a penalty to MXO in cases of data breaches or violation; at the least, the lost or damaged data must be repaired without any additional cost. MXO will be sued to pay a penalty if they blame SNB for their own mistakes, such as unintentional data tampering, damage or fake claims. (This research did not specify fixed penalties because they may vary for each organization.)
Disaster Recovery: SNB must prepare data backups at distributed country-wide locations in case of natural disasters. Data must not be violated or tampered with. MXO's applications must resume within a 24-hour time frame from another cloud infrastructure with the same security features, controls and standards.
Exit Clause: SNB can exit from the provided service when the guarantees cannot be met on numerous occasions.
Ordinary Exit: If MXO wants to terminate their service with SNB due to personal reasons, they must provide 30 days' pre-notification with clear requirements, if any, such as data migration to another CSP or to the organization's own storage.
Exit on SLA Violation: MXO has full privileges and rights to withdraw from the cloud services without any pre-notification and with no penalty if SNB is unable to comply with the agreements stated in the SLA. When MXO exits, all their data must be returned in its actual form with maintained data integrity, and SNB must remove all their data from storage disks and associated backup devices at each location. The TTPA must delete all parameters of MXO on contract exit. In case of data migration to another CSP, SNB must assist MXO without any dishonesty.
Assumptions: This research assumed that there will be secure backups for MXO's data at multiple locations, at least one additional storage beside the local storage. In cases of natural disasters or data violations, data will be securely retrieved from the backup zone and restored. Whenever any party wishes to terminate the service contract by following the legal requirements, MXO's data will be returned without any loss, damage, theft or cheating. Data will be completely removed from local and backup storages, and disks will be accurately sanitized using forensically sound tools so that they do not contain any hidden data that could be recovered after deletion. A penalty may be imposed on any party illegally terminating the contract.
4.4.5 Trusted Third Party Services
As described in Section 4.3, TTP will technically assist the client organization in the KM approach and the data auditing process. TTPA is responsible for the secure storage of CA's cryptographic RSA keys, which are mainly used for data decryption, and also for conducting the requested auditing services to verify and monitor the integrity of the data stored at the cloud storage. This research enhanced the data auditing process by overcoming the threats of a malicious TTPA and introducing the concept of detecting malicious users, and improved the security of the KM approach using sound steganography techniques. The proposed KM approach and data auditing process of SCSM are described in the following sub-sections.
4.4.5.1 Key Management Approach
Encryption is an essential task to preserve data confidentiality, especially when data are stored at remotely located cloud storages. The security of an encryption scheme depends heavily on protecting the corresponding decryption keys. Cryptographic private keys must be protected against loss or theft (Soni and Soni, 2013). If data owners lose their keys, they can never decrypt their data, and if the keys are stolen by an adversary, this may lead to a breach of data confidentiality (Sun et al., 2010). Keys must be protected using the same security controls as the data itself, since theft of the keys effectively amounts to compromise of the data (Jang-Jaccard et al., 2012).
Usually, in the information security domain, a key escrow system is used to store keys securely. There are two main reasons for using key escrow systems. Firstly, a key escrow system acts as a secure recovery system that protects the secret keys of users in cases of loss or theft. Secondly, it is used in scenarios where government authorities such as the NSA want to access the encrypted data of an organization for security investigations, by retrieving the required keys from the escrow system. Since a key escrow system is maintained by a third-party organization, clients have no knowledge of when their confidential records are decrypted by unprivileged authorities. Therefore, if key escrow systems are used for cloud storage services, clients who deal with mission-critical data will not trust such systems, due to privacy concerns. Alternatively, KM approaches where the keys are managed by the CSPs are also not accepted by clients, due to trust concerns (Janssen, 2010).
In order to overcome this issue, SCSM involves a secure KM approach which uses a key escrow system designed around the requirements and implications of a cloud computing environment. The proposed KM approach is used to protect the secret keys, since these are the most significant element in protecting data confidentiality. The key escrow system is maintained by an independent TTP and is secured using a hardware security mechanism, i.e. TPM. The stored keys are not accessible by any unprivileged authority besides the key owner. Using this approach, the secret keys are protected not only during storage but from generation until destruction. For example, when the CA generates an RSA pair of public and private keys for the encryption and decryption operations, the keys will initially be stored safely in TED at the local machine of CA. After completing the desired cryptographic operations, CA will send the private keys to TTPA for secure storage and delete them from the local storage, due to security concerns such as remote attacks, malware, viruses or unauthorized access by malicious insiders. TTPA will securely store and archive the keys of CA with enhanced system protection using TPM and secure backups. This will overcome the client's concerns about losing the keys. Since this research considers TTPA a semi-trusted entity, the proposed KM approach protects the private key of the client even from the TTPA. When keys are transferred to TTPA for storage, in addition to the encrypted SSL channel, SCSM will automatically encode the keys in (.wav) sound files using sound steganography techniques, and these files will be transferred to TTPA. Under the protection of RBAC with CRSCG, access to decode the secret key sound files is limited to privileged users. For instance, CA is the only user who can decode the secret key sound file. Using this technique, the keys of the data owner are protected from malicious users, since even the CSPA is not able to access the CA's keys. The keys stored with TTPA will always be available for the CA to retrieve and decode whenever required. Normally, steganography is used for data hiding, but this research mainly implemented sound steganography as a protection approach to hide the clients' secret keys from malicious entities. The effect of using sound steganography on system performance was analyzed experimentally. It was identified that encoding and decoding of the sound file takes approximately 3 to 5 seconds, which is similar to browsing a webpage; hence, users will not experience delay during the KM process, and this process does not have a negative effect on the overall system performance. In order to ensure enhanced protection, this research recommends that the key holder periodically change the keys after each 30-day cycle. It is also a research assumption that, whenever requested by CA, TTPA will destroy the keys from the primary and all backup storages using software such as Darik's Boot and Nuke (DBAN), which deletes the keys permanently and renders the storage locations unrecoverable.
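As an illustration of the encoding step described above, the following is a minimal sketch of least-significant-bit (LSB) audio steganography. It is not the SCSM source code: it operates on a plain byte array standing in for the PCM sample data of a (.wav) file, and the class name and placeholder key string are hypothetical.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Minimal LSB sound-steganography sketch: hides each bit of the secret in the
// least significant bit of successive audio sample bytes (e.g. WAV PCM data).
public class LsbStego {

    // Embed the secret bytes into a copy of the carrier's sample bytes.
    static byte[] encode(byte[] samples, byte[] secret) {
        if (secret.length * 8 > samples.length)
            throw new IllegalArgumentException("carrier too small");
        byte[] out = Arrays.copyOf(samples, samples.length);
        for (int i = 0; i < secret.length * 8; i++) {
            int bit = (secret[i / 8] >> (7 - (i % 8))) & 1; // next secret bit, MSB first
            out[i] = (byte) ((out[i] & 0xFE) | bit);        // overwrite the sample's LSB
        }
        return out;
    }

    // Recover 'length' secret bytes from the sample bytes.
    static byte[] decode(byte[] samples, int length) {
        byte[] secret = new byte[length];
        for (int i = 0; i < length * 8; i++) {
            int bit = samples[i] & 1;
            secret[i / 8] = (byte) ((secret[i / 8] << 1) | bit);
        }
        return secret;
    }

    public static void main(String[] args) {
        byte[] carrier = new byte[1024]; // stands in for WAV sample data
        byte[] key = "hypothetical-rsa-private-key".getBytes(StandardCharsets.UTF_8);
        byte[] stego = encode(carrier, key);
        byte[] recovered = decode(stego, key.length);
        System.out.println(new String(recovered, StandardCharsets.UTF_8));
    }
}
```

A complete implementation would additionally skip the 44-byte WAV header and embed the secret's length before the payload; since only the lowest bit of each sample byte changes, the audible difference in the carrier remains negligible.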
4.4.5.2 Data Auditing Process
As a user of SCSM, along with the keys, CA also generates VMD for the files stored at the cloud storage using DSA with SHA-1, and sends it to the TTPA with a request to initiate the data auditing process. If CA has already sent the VMD, it needs to be updated whenever the stored files are modified or extended with new data. Similar to the RSA cryptographic keys, the VMD is also transferred encoded in a (.wav) sound file to protect it against potential attacks. This sound file can be decoded only by the TTPA in order to perform data integrity verification. The VMD of each file is generated using two methods, i.e. file-level and block-level hash calculation. When CA selects a stored file for generating the data hash, the server automatically divides it into N blocks according to its size. For example, if a file F is of size 8 MB, it will be divided into four equal chunks F1, F2, F3, and F4, each of size 2 MB. The generated DSA private key will be used internally to calculate the VMD by signing the data, and a copy of the associated public key will be transferred to TTPA for data integrity verification.
The extracted VMD will be stored at the machine of TTPA, where it will be compared with the new digital signatures calculated for a particular file and its blocks using the received DSA public key. If the corresponding VMD and digital signatures are equal, the data are intact; otherwise, the data have been violated. When the process of data integrity verification is accomplished, TTPA will generate audit reports, and these reports will be automatically shared with CA as well as CSPA. TTPA will also analyze the reports in order to take appropriate actions. For instance, when data integrity is verified successfully, TTPA will send a success signal to CA as an email alert, which tells the CA to move further with other operations or to download the files, since they are safe with well-maintained integrity.
Alternatively, if a data integrity violation is detected, the auditing reports will clearly indicate the particular blocks that have been violated. This enables efficient recovery of the data, since there is no need to recover the non-violated contents, i.e. the entire file, if only one or two blocks have been violated. TTPA will issue an alert to CSPA to recover the data by considering the auditing reports, and will issue a signal to CA indicating the data integrity violation. CA will not move further with other processes until the data are recovered to their previous state. CSPA will recover the data from the backup-zone cloud storage to their original state. If the data are recovered successfully, TTPA will confirm the data recovery results by reinitiating the data integrity verification, and if the data are intact, TTPA will send a success signal to CA. However, if the violated data are not recovered, TTPA will view the access records log files to identify the malicious user, which may be either the client or the CSP.
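The block-signing and verification mechanism described above can be sketched with the standard Java security API. This is an illustrative outline rather than the thesis implementation: the class name, block size, and key size are assumptions, and the sketch reports only the index of the first violated block instead of producing a full audit report.

```java
import java.security.*;
import java.util.ArrayList;
import java.util.List;

// Sketch of block-level integrity auditing with SHA1withDSA: the file is split
// into fixed-size blocks, each block is signed with the DSA private key (the
// VMD), and the auditor later re-verifies every block with the public key.
public class BlockAuditSketch {

    static List<byte[]> splitIntoBlocks(byte[] data, int blockSize) {
        List<byte[]> blocks = new ArrayList<>();
        for (int off = 0; off < data.length; off += blockSize) {
            int len = Math.min(blockSize, data.length - off);
            byte[] b = new byte[len];
            System.arraycopy(data, off, b, 0, len);
            blocks.add(b);
        }
        return blocks;
    }

    // One digital signature per block: this list plays the role of the VMD.
    static List<byte[]> generateVmd(List<byte[]> blocks, PrivateKey priv)
            throws GeneralSecurityException {
        List<byte[]> vmd = new ArrayList<>();
        Signature signer = Signature.getInstance("SHA1withDSA");
        for (byte[] block : blocks) {
            signer.initSign(priv);
            signer.update(block);
            vmd.add(signer.sign());
        }
        return vmd;
    }

    // Returns the index of the first violated block, or -1 if all are intact.
    static int audit(List<byte[]> blocks, List<byte[]> vmd, PublicKey pub)
            throws GeneralSecurityException {
        Signature verifier = Signature.getInstance("SHA1withDSA");
        for (int i = 0; i < blocks.size(); i++) {
            verifier.initVerify(pub);
            verifier.update(blocks.get(i));
            if (!verifier.verify(vmd.get(i))) return i;
        }
        return -1;
    }

    public static void main(String[] args) throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("DSA");
        gen.initialize(1024);
        KeyPair pair = gen.generateKeyPair();

        byte[] file = new byte[8192];            // stands in for an uploaded file
        List<byte[]> blocks = splitIntoBlocks(file, 2048);
        List<byte[]> vmd = generateVmd(blocks, pair.getPrivate());

        System.out.println(audit(blocks, vmd, pair.getPublic())); // -1: intact
        blocks.get(2)[0] ^= 1;                   // simulate an integrity violation
        System.out.println(audit(blocks, vmd, pair.getPublic())); // 2: block 2 violated
    }
}
```

Because verification pinpoints the violated block, only that block needs to be restored from the backup zone, matching the recovery behaviour described above.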
Normally, in a cloud computing environment, CSPs are considered the least trusted entities, because they have a greater degree of control over clients' data. However, there are certain cases where clients may be malicious as well. For example, when signing an SLA, both parties settle the possible penalties for cases where clients' data are lost or their confidentiality and integrity are breached. Some malicious clients may claim a penalty from CSPs for fake SLA violations, in order to obtain the penalty amount or the free service that is provided as a result of non-compliance with the SLA requirements. Malicious clients may report loss of data without even sending it to the cloud storage, or they may report integrity violations after intentionally modifying the data. The ongoing research in the field of cloud computing, for example the contributions proposed by Lin et al. (2012), Nkosi et al. (2013) and Ahmed and Raja (2010), is mostly targeted towards protecting the clients' rights against malicious CSPs.
Figure 4.4: Access Logs Report
This research provides a malicious-user detection solution that protects both entities, i.e. the client from a malicious CSP and vice versa. Using SCSM, malicious users can be detected easily. For instance, when CA logs in to the system, the principal name of the user, i.e. the client admin, is immediately recorded in the access records file stored at the cloud server. Each activity or operation performed by users is recorded with its complete details, such as date, time, user role name, and the activity performed. A selected snapshot of the access records log file is shown in Figure 4.4. This research assumed that only TTPA will be privileged to download or view the access records file if any fake claims are reported by the client, or if the CSP has really breached compliance with the SLA. The details recorded in the file will assist the TTPA in identifying the incident and suggesting the appropriate actions to be taken, as decided in the SLA. The access records file will remain stored at a highly secure and isolated location of the cloud server which is not known to the CSPA, considering the separation of duties. It is also a research assumption that the integrity of the access records log file is protected using the TPM. This solution ensures the existence of a fairness policy for both involved parties. After the TTPA detects the user responsible for a data integrity violation, appropriate actions will take place by considering the penalties specified in the SLA.
4.5 Process of SCSM
SCSM combines all the designed components to protect data confidentiality as well as integrity at the cloud storages and to ensure the delivery of trusted services to the clients. The previous section discussed the role of each component in formulating SCSM. The entire process of SCSM involves the generation of cryptographic keys, encryption and decryption, partially homomorphic data processing, VMD generation, data auditing, data backup and recovery, and data upload and download tasks. The workflow of SCSM is represented in Figure 4.5, which clearly shows how users interact and perform their privileged tasks. A description of each task involving users' interaction, together with the working mechanism of each component in developing a secure, confidentiality- and integrity-preserving trusted cloud storage, is provided in Chapter 5 in greater detail, using screenshots of the implemented system.
Figure 4.5: Process of SCSM
4.6 Summary
SCSM comprises five components which work together to formulate a trusted cloud storage that preserves clients' data confidentiality and integrity. It enables clients to upload their confidential data to cloud storages and to decrypt or download them with well-maintained integrity. It works strictly by considering the SLA settled between the clients and the CSP, which is consistently monitored by the TTP to ensure the delivery of trusted and secure services. Business organizations can leverage the benefits of unlimited capacity, resilient availability, disaster recovery, cost-effective storage and scalability by transferring their confidential data to cloud storages based on SCSM. The successful implementation of SCSM at a real industry level will also benefit existing CSPs in offering secure remote cloud storage services by achieving clients' trust, since by using SCSM clients have a greater degree of control over their confidential data, where only privileged users are allowed to perform significant operations.
CHAPTER 5
IMPLEMENTATION OF THE SECURE CLOUD STORAGE MODEL
5.1 Introduction
Software implementation is the process of creating software by writing a collection of computer programs. This activity usually begins with in-depth research or a general understanding of the system and the user requirements. It typically involves one or more computer programmers writing source code using a number of different tools, techniques, applications, and programming languages. During and after the creation of the code, a great deal of testing is typically involved to ensure that the program runs accurately as per the requirements and is free from errors, bugs and glitches. Software designers also play a critical role in the entire development process, since they construct the building blocks of the system to be implemented using various approaches such as Unified Modelling Language (UML) diagrams (Bourque and Fairley, 2014; Pressman, 2010).
Since this research was undertaken to contribute to the field of SE, the development of SCSM was an essential task to examine its practical functionality and strengths, and to determine that it achieves its goal effectively. SCSM was developed by implementing and integrating its isolated components as an aggregate system.
After analyzing the requirements of SCSM, its building blocks were described, and it was designed using use-case, architecture, and sequence diagrams. This activity played an integral role in the implementation process. The software development started with writing the source code using NetBeans with the support of various programming languages and frameworks, which include Java, JSP, XHTML, and servlets with the GlassFish server. In order to assist the evaluation process, SCSM was implemented on a local platform by considering the requirements of a cloud hosting environment. Following the successful completion of the system development process, SCSM was deployed on a cloud platform, where it was hosted on a Linux-based VPS running CentOS.
The SCSM prototype was ubiquitously and pervasively accessible via the website www.utmcloudstorage.com. It is a research assumption that, in real-life scenarios, SCSM will be provided by the CSP, but it will be validated by an independent third-party organization to ensure that the system does not contain any hidden software bugs, errors, glitches or vulnerabilities, such as storing the clients' keys or metadata, as this may raise major confidentiality and integrity concerns. This chapter presents the implementation and deployment details of SCSM. The system workflow is also defined using an assumption-based scenario which demonstrates each activity performed by the users, with practical system snapshots for deep understanding and analysis. The remainder of this chapter is organized in four sections. Section 5.2 describes the software development process of SCSM. Section 5.3 describes the systematic workflow of SCSM. Section 5.4 describes the deployment details of SCSM. Section 5.5 presents the summary of this chapter.
5.2 Software Development Process of SCSM
The multi-factor authentication and authorization process was implemented, firstly, by writing the source code for a basic HTTP-based authentication process and, secondly, by implementing RBAC with CRSCG. The username and password are sent to the server as Base64-encoded text for verification over the internet, under the protection of the configured SSL. The information about system users is stored in a database known as a realm. A realm is a security policy domain defined for a web or application server; it contains a collection of users who may or may not be assigned to a group (Oracle, 2013). When a user requests a protected resource, such as the SCSM link www.utmcloudstorage.com/encryption.jsp, the web server returns a dialog box that requests the username and password. The end user submits the username and password to the server, which authenticates the user in the specified realm; if authentication is successful, the server returns the requested resource, as shown in Figure 5.1. The authorized users of SCSM are assigned to roles such as Client, TTPA, and Cloud Administrator. These roles are created in the ACP using the GlassFish server, where each role is assigned certain privileges. These roles are further mapped to the groups defined in the application's deployment descriptor, the glassfish-web.xml file, as shown in Figure 5.2.
Figure 5.1: HTTP based Authentication
Figure 5.2: Role Mapping
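For illustration, the Base64 step of HTTP Basic authentication mentioned above can be reproduced in a few lines of standard Java; the class name and credentials below are hypothetical, not taken from the SCSM source. Note that Base64 is an encoding rather than encryption, which is why the header must travel under SSL protection.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// How an HTTP Basic authentication header is formed: the string
// "username:password" is Base64-encoded and prefixed with "Basic ".
public class BasicAuthHeader {

    static String header(String username, String password) {
        String credentials = username + ":" + password;
        return "Basic " + Base64.getEncoder()
                .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // Hypothetical credentials, for illustration only.
        System.out.println(header("clientadmin", "secret"));
    }
}
```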
Moreover, the SCSM application is implemented with declarative security annotations such as @DeclareRoles, @RolesAllowed, @DenyAll, and @PermitAll used in the source code. These annotations specify which methods of the enterprise bean classes may be accessed by which authorized users. The access to perform any operation in the SCSM application is controlled at the task level using the @RolesAllowed annotation. An example of this process is shown in Figure 5.3, which shows the declaration of all the roles associated with the system, and the users permitted to perform the tasks of data encryption as well as viewing auditing reports.
All three roles are declared at the class level, and access for these roles is initially denied by default. Furthermore, each method specifies which roles are allowed to invoke it. As shown in Figure 5.3, only the Client role is privileged to invoke the encrypt data files method. When this method is invoked, it executes the JSP link that contains the entire business logic of the data encryption process. The method view auditing reports is declared to be accessible by all the roles, because all users are privileged to view the data auditing reports. Besides RBAC, the second layer of authentication in SCSM is CRSCG, which is developed by implementing the algorithm described in Section 4.4.1.2 of Chapter 4.
Figure 5.3: Roles and Security Annotations
The process of encryption and decryption was developed by implementing the homomorphic RSA algorithm in Java. Initially, two large prime numbers were calculated for the variables p and q of type BigInteger. These values were then used to generate the private and public keys. The public key was used to encrypt the data, and the private key was used to decrypt the data. Performing decryption in the RSA algorithm is essentially the reverse process of encryption, but data can never be decrypted without the actual private key that was generated as a pair with the public key. The source code snippet for the key generation, encryption, and decryption processes implemented in the SCSM system is shown in Figure 5.4.
Figure 5.4: RSA Partial Homomorphic Cryptography
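The scheme just described can be sketched as a self-contained textbook-RSA example with BigInteger, including the multiplicative homomorphic property that SCSM relies on for processing encrypted data: E(a) · E(b) mod n = E(a · b). The class name, key size, and salary values are illustrative assumptions, not taken from the SCSM source.

```java
import java.math.BigInteger;
import java.security.SecureRandom;

// Textbook RSA over BigInteger, demonstrating the multiplicative
// homomorphic property: E(a) * E(b) mod n = E(a * b).
public class RsaHomomorphicSketch {
    final BigInteger n, e, d;

    RsaHomomorphicSketch(int bits) {
        SecureRandom rnd = new SecureRandom();
        BigInteger p = BigInteger.probablePrime(bits / 2, rnd); // two large primes
        BigInteger q = BigInteger.probablePrime(bits / 2, rnd);
        n = p.multiply(q);
        BigInteger phi = p.subtract(BigInteger.ONE)
                          .multiply(q.subtract(BigInteger.ONE));
        e = BigInteger.valueOf(65537);                          // public exponent
        d = e.modInverse(phi);                                  // private exponent
    }

    BigInteger encrypt(BigInteger m) { return m.modPow(e, n); }
    BigInteger decrypt(BigInteger c) { return c.modPow(d, n); }

    public static void main(String[] args) {
        RsaHomomorphicSketch rsa = new RsaHomomorphicSketch(1024);
        BigInteger salary = BigInteger.valueOf(5000);           // hypothetical attribute
        BigInteger factor = BigInteger.valueOf(2);              // hypothetical update factor
        // Multiply the two ciphertexts; decrypting yields the updated plaintext.
        BigInteger encUpdated = rsa.encrypt(salary)
                                   .multiply(rsa.encrypt(factor))
                                   .mod(rsa.n);
        System.out.println(rsa.decrypt(encUpdated));            // prints 10000
    }
}
```

RSA is only partially homomorphic: products of ciphertexts correspond to products of plaintexts, so an "increase" such as a salary update must be expressed as a multiplicative factor, and additions cannot be performed in the encrypted domain.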
In order to process the data while it remains encrypted at the cloud storage, the input provided by a user will be encrypted using the encryption formula and then provided to an appropriate servlet to perform the desired computation. When the task is completed, the result will be stored encrypted, and it will be decrypted just prior to display using the decryption formula. The auditing process was developed by implementing the DSA and SHA-1 algorithms. The user's data file is divided into a number of blocks, a digital signature for each block is generated, and its integrity is verified when the data auditing service is performed. The code snippets for metadata generation and the data auditing process are shown in Figures 5.5 and 5.6.
Figure 5.5: Metadata Generation
Figure 5.6: Metadata Verification
The KM approach of SCSM aims to protect the private keys from the generation phase until the destruction phase. During the development of this component, sound steganography techniques were implemented to encode the private keys inside small sound files. Access to decode the private keys is guarded by the multi-factor authentication and authorization process. In order to guarantee the safe storage of keys, only the clients, or key owners, are able to decode their private keys from the sound files. The keys are secured during the transfer and retrieval process by the installed SSL. Since the configuration of SSL was a post-development task, it is discussed in Section 5.4 on system deployment. A code snippet used to develop the sound steganography encoding component in SCSM is shown in Figure 5.7. During the development process, the key elements of the SLA related to data protection, access control, data auditing, data ownership, and the KM approach were implemented, whereas certain elements, such as those related to law or regulatory compliance, are beyond the scope of the SCSM development and are addressed as research assumptions.
Figure 5.7: Sound Steganography
5.3 Systematic Workflow of SCSM
The process of accessing the SCSM system starts from visiting its website homepage, where a user clicks on the login button, which triggers the first step of authentication and authorization by requiring and validating the provided username and password. When a user successfully logs in to the system, a welcome page is displayed with the principal name and the role of that particular user, indicating all the system operations, as shown in Figure 5.8. Three roles, i.e. CA, TTPA, and CSPA, were created using the GlassFish server file realm security domain and controlled using RBAC. Each role is linked to a different group which specifies its required access privileges. The access controls for performing operations by a privileged user are enforced using Enterprise Java Beans (EJBs) and Java security annotations. The system identifies the user by the principal name and role, and then automatically creates a safe, controlled computing environment according to the permitted privileges.
Considering the session created in Figure 5.8, users can perform only the valid operations specified according to their roles, as shown in Figure 4.3 of Chapter 4. However, for enhanced security, each operation is further controlled by CRSCG, where a user is required to request a secret code, and the operation can be performed only when the retrieved code is entered accurately. The SCSM system enables the users to perform seven major tasks, which include data transfer and retrieval, encrypted data processing, VMD generation and secure transfer of parameters, data integrity verification, data recovery, private key retrieval, and data downloading. The implementation mechanism and functionalities of SCSM are systematically described in the following sub-sections by considering an assumption-based scenario which focuses on a user uploading a file to the cloud storage while preserving its confidentiality and ensuring its integrity.
Figure 5.8: Operations of SCSM
5.3.1 Data Transfer and Retrieval
The orderly process of the SCSM system starts with the key generation operation for uploading and encrypting data. Let us assume that the CA is logged in to the system and ready to upload a file named EMP.txt, containing employees' confidential records, to the cloud storage. After generating the RSA public and private key pair, the SCSM server will require the chosen file and the public key for the encryption process, since the data needs to be homomorphically encrypted from the upload stream. The process of encryption is shown in Figure 5.9.
Figure 5.9: Encryption Process
Since no transaction can proceed unless the accurate secret code has been provided by the authorized user, for the encryption process CA will obtain the secret code either by requesting the server or by using a token generator device, as discussed in Section 4.4.1.2 of Chapter 4. Upon successful entry of the secret code, the system will perform the operation; otherwise, the request will be rejected. When the server successfully validates the provided secret code and authorizes the privileges of the client, the file EMP.txt will be fully encrypted and renamed as Encrypted.txt when it is saved at the cloud storage. Alternatively, for data decryption, besides the secret code, the CA will request the server by providing the appropriate private key, and the server will apply that key to decrypt the data. Upon successful authorization, the server will retrieve and decrypt one attribute from the file at a time until all the contents are successfully decrypted. The process of data decryption is shown in Figure 5.10.
Figure 5.10: Decryption Process
5.3.2 Encrypted Data Processing
Since the data was encrypted with RSA partial homomorphic cryptography, CA can also perform a limited number of transactions without actually decrypting the file contents. The possible transactions are shown in Figure 5.11. For instance, if CA wants to increase the salary of an employee, then in order to process a new input factor with an existing encrypted attribute, i.e. Salary, the new input factor must be encrypted first, after which both factors can be calculated. Thus, the RSA-based public key of the CA will be used for updating the data, and the private key will be required if CA needs to decrypt the data after the computation process.
Figure 5.11: Data Processing
5.3.3 Verification Metadata Generation and Secure Transfer of
Parameters
When the desired transactions have been performed and the changes committed, CA will move on to the data auditing service, which is achieved with the assistance of TTPA. However, prior to this activity, CA will generate VMD for the stored file. During this process, a pair of DSA-based private and public keys will be generated internally, where the private key will be used with SHA-1 to calculate the VMD, and the public key will be transferred to the SCSM server, as shown in Figure 5.12.
Figure 5.12: VMD Generation and Transfer Process
Digital signatures will be calculated for the entire file and for each of its blocks. Data confidentiality will remain preserved during the entire auditing process since the file stored on the cloud is encrypted. Immediately after the completion of this process, the CA will send the significant parameters, i.e. the VMD (all the digital signatures) and the RSA-based data-decrypting private key, to the TTPA as two separate (.wav) sound files encoded using sound steganography techniques. Updating the existing VMD or sharing new VMD with the TTPA is a single-click activity, as shown in Figure 5.12.
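The per-block hashing behind the VMD can be sketched as follows. This is a minimal Python sketch: the block size is hypothetical, and the DSA signing of each SHA-1 digest with the CA's private key is assumed rather than shown:

```python
import hashlib

BLOCK_SIZE = 16   # bytes per block -- hypothetical, for illustration

def block_digests(data: bytes, block_size: int = BLOCK_SIZE):
    """SHA-1 digest per block plus one digest over the entire file.

    Each digest would then be signed with the CA's DSA private key to
    form a VMD entry (the signing step is assumed, not shown here).
    """
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    per_block = [hashlib.sha1(b).hexdigest() for b in blocks]
    whole_file = hashlib.sha1(data).hexdigest()
    return per_block, whole_file

per_block, whole = block_digests(b"encrypted attribute stream ...")
```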
Considering the proposed KM approach, the CA sends the private key to the TTPA by performing the operation specified as Send Keys for Secure Storage, as shown in Figure 5.1. Under this KM approach, the responsibilities of the TTPA are to store the private key of the CA with reliable backups and under strict system security. The CA will remove the private key from local storage and initiate a request for the data auditing process. The client's private key and the VMD are protected against malicious security threats by encoding them, in separate sound files, using sound steganography techniques. Considering the permissions of the SCSM users shown in Figure 4.3 of Chapter 4, the privilege to decrypt the sound file containing the private key is granted only to the client; no one else can decode this sound file since it is protected using RBAC and CRSCG. The TTPA, by contrast, can decode the sound file containing the VMD because the metadata are required to perform the data integrity verification process described in Section 4.4.5.2 of Chapter 4.
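The encoding of these parameters into sound files can be illustrated as follows. The text does not specify the embedding scheme, so this sketch assumes a simple least-significant-bit (LSB) approach over raw sample bytes to show how a secret could be hidden in, and recovered from, audio data:

```python
# LSB sound-steganography sketch. The embedding scheme is an assumption
# (the model does not specify one); raw 8-bit sample bytes stand in for
# the data chunk of a .wav file.
def embed(samples: bytes, secret: bytes) -> bytes:
    """Hide a length-prefixed secret in the least significant bits."""
    payload = len(secret).to_bytes(4, "big") + secret
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    out = bytearray(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit      # overwrite the sample's LSB
    return bytes(out)

def extract(samples: bytes) -> bytes:
    bits = [s & 1 for s in samples]
    def to_bytes(bs):
        return bytes(int("".join(map(str, bs[i:i + 8])), 2)
                     for i in range(0, len(bs), 8))
    length = int.from_bytes(to_bytes(bits[:32]), "big")
    return to_bytes(bits[32:32 + 8 * length])

carrier = bytes(range(256)) * 8             # stand-in for .wav sample data
stego = embed(carrier, b"RSA-private-key")
assert extract(stego) == b"RSA-private-key"
```

Because only the least significant bit of each sample changes, the carrier audio remains perceptually unchanged while the secret travels inside it.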
5.3.4 Data Integrity Verification
Since the TTPA requires the VMD of the file to be audited, the TTPA's first task after a successful login is to decode the sound file containing the VMD and then proceed with the auditing session. The processes of decoding the required sound file and of data auditing are shown in Figures 5.13 and 5.14 respectively.
Figure 5.13: VMD Decoding Process
Figure 5.14: Data Auditing Process
The TTPA will download the generated VMD to personalized local storage and proceed with the auditing process by providing it to the SCSM server. Fresh VMD for the stored file will be computed using the DSA-based public key of the CA together with SHA-1, and it will be compared with the existing metadata provided by the CA. Based on the auditing results, the SCSM server will create and share the auditing report among the involved users. Since the file was securely saved with the required integrity, the reports indicate that the integrity of the file and of each of its blocks is well maintained, as shown in Figure 5.15.
Figure 5.15: Auditing Report
Alternatively, if file integrity is violated by an adversary, the audit detects it. For example, Figure 5.16 shows that a malicious user has modified the first block of the file, intentionally or unintentionally, by replacing an encrypted attribute with garbage characters such as XX. The auditing report clearly indicates the integrity violation and its location in the file, which is block-1, as shown in Figure 5.17. The TTPA will then request the CSPA to resolve the violation and recover the data to its original state.
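The comparison step of this audit can be sketched as follows (Python; signature verification with the CA's DSA public key is assumed, so plain SHA-1 digests stand in for the signed VMD entries):

```python
import hashlib

def audit(blocks_now, stored_digests):
    """Return 1-based indices of blocks whose integrity is violated."""
    return [i for i, (block, stored)
            in enumerate(zip(blocks_now, stored_digests), start=1)
            if hashlib.sha1(block).hexdigest() != stored]

original = [b"enc-attr-1", b"enc-attr-2", b"enc-attr-3"]
vmd = [hashlib.sha1(b).hexdigest() for b in original]   # stored metadata

tampered = list(original)
tampered[0] = b"XX"                  # adversary garbles the first block
assert audit(original, vmd) == []    # intact file: integrity maintained
assert audit(tampered, vmd) == [1]   # report pinpoints block-1
```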
Figure 5.16: Data Integrity Violation
Figure 5.17: Auditing Report After Violation
5.3.5 Data Recovery
After viewing the auditing report, the CSPA will log in to the system, initiate the data recovery process and issue a request to the SCSM server for recovering block-1, as shown in Figure 5.18. After successful recovery, the CSPA will alert the TTPA to restart the auditing process in order to ensure that the data are intact. The auditing reports after recovery indicate that integrity is well maintained, as shown in Figure 5.19, and there is no data loss. At this stage, the TTPA will inform the CA to proceed with other operations such as uploading additional data files, updating existing records or downloading the file permanently.
Figure 5.18: Data Recovery Process
Figure 5.19: Auditing Report after Data Recovery Process
5.3.6 Private Key Retrieval and Data Downloading
As described in Section 4.4.5.1 of Chapter 4, after performing the required operations, the CA will send the private keys to the TTPA for secure storage and delete them from local storage due to security concerns such as remote attacks, malware, viruses or unauthorized access by malicious insiders. At later stages, when the keys are required, the CA will retrieve the sound file from the TTPA and request the SCSM server to extract the private key from it, as shown in Figure 5.20. The CA will use that private key to decrypt the data in real time from the stream while it is being downloaded, as shown in Figure 5.21. When the selected files have been downloaded and saved, the CA may terminate the process or proceed with uploading other files.
Figure 5.20: Private Key Decoding Process
Figure 5.21: Data Retrieval Process
5.4 Deployment of SCSM
Software deployment is the process of making a system available for use by clients or an organization. At this phase of the SE life cycle, deployment engineers are responsible for tasks such as setting up servers and platforms, installing software, and managing the required hardware. In this research, the SCSM prototype was deployed using the GlassFish server interface on a practical cloud computing infrastructure. The developed system was hosted on the eApps cloud computing platform, where it ran on a VPS using Linux CentOS. The eApps infrastructure runs several VPSs, each with its own OS that executes the hosting software for a particular user. The hosting software can include web servers, database management systems, file transfer protocols, mail servers and specialized applications for activities such as e-commerce and blogging. At the eApps infrastructure, each VPS runs in a state-of-the-art cloud hosting environment with advanced capabilities for self-service, scalability, and automatic recovery from hardware failures (eApps, 2014).
Usually, the deployment of an application using the GlassFish server has two stages, i.e. assembly and deployment. Assembly is the process of combining the discrete components of an application or module into a single unit that can be installed on an application server. The GlassFish server assembly process conforms to the standard Java Enterprise Edition specifications. Deployment is the process of installing an application or module on the GlassFish server, optionally identifying location-specific information such as a list of local users that can access the application, or the name of the local database. The GlassFish server deployment tools expand the archive files into an open directory structure that is ready for the users. The GlassFish server supports two kinds of deployment mechanisms, application-based and module-based deployments (Oracle, 2013); the SCSM system was deployed using the module-based deployment process, as shown in Figure 5.22.
Figure 5.22: Module based Deployment Using Glassfish Server
(Oracle, 2013)
In the module-based deployment process, the collection of servlets, JSPs, JSP tag libraries, EJBs, HTML pages, utility classes, annotations, and web deployment descriptors was assembled into a web application archive file which was then deployed to the application server. The GlassFish server was also installed and configured at the remote VPS to handle all user requests and to perform the necessary tasks as per the implementation logic. An industry-standard 256-bit SSL certificate from AlphaSSL was installed at the VPS to secure the communication among the involved users and the SCSM server. SSL was configured for the security of the entire domain www.utmcloudstorage.com; thus, when data are uploaded or downloaded, the significant parameters are secured from MITM attacks since the entire communication channel is encrypted. When an SLA is settled between the client and the CSP, the provided service must be in compliance with the SLOs, and the client must be served with the expected level of service. Considering the scope, resources and time constraints of this research, SCSM was not implemented to follow the SLA completely, because some SLA requirements, for example the deployment of ISO 27001 guidelines, require a real cloud datacentre. Therefore, this research recommends, and assumes, that when SCSM is deployed at the industry level by a CSP, it must be fully in compliance with all the elements of the signed SLA.
5.5 Summary
SCSM was implemented using several programming languages and supporting frameworks. It was deployed on a cloud computing infrastructure to determine its practical usability as well as functionality. Three main users are involved in the usage of SCSM. The CA can perform tasks such as sending and retrieving data, processing the encrypted records, and generating and sending VMD together with cryptographic keys to the TTP for secure storage and data auditing services. The TTPA performs tasks such as data integrity verification and reporting to the CA, whereas the CSPA assists the TTPA in recovering violated data from backup cloud storages. SCSM was deployed prior to the evaluation stage so that the research could obtain authentic evaluation results while the system resides in an actual cloud computing domain. The SCSM system will operate completely by considering the security requirements of a data owner as specified in the SLA. It is also assumed by this research that SCSM will be verified, evaluated and approved by independent third-party testers to eliminate hidden software-based vulnerabilities, bugs, errors, and glitches.
CHAPTER 6
EVALUATION AND RESULTS
6.1 Introduction
This chapter describes the evaluation methods used to verify the strengths of SCSM in achieving its intended goal, i.e. preserving the confidentiality and integrity of sensitive data stored at remote cloud storages and ensuring the delivery of trusted services to clients. The research evaluation strategy is divided into two major stages: the initial phase focused on evaluating each component of SCSM, and the next phase focused on evaluating its entire process. The implementation of RSA partial homomorphic cryptography, the multi-factor authentication and authorization process, the SLA, the 256-bit SSL protocol, the TTP-associated services including the KM approach and the data auditing process, and the entire process of SCSM with its developed prototype are evaluated using appropriate methods. The evaluation mechanisms and the obtained results are critically analyzed and discussed in this chapter.
The remainder of this chapter is organized in five sections. Section 6.2 illustrates the overall evaluation strategy of this research. Section 6.3 defines the evaluation process and the results for each component of SCSM. Section 6.4 discusses the evaluation method and results for the aggregated process of SCSM and its prototype. Section 6.5 describes the advantages of SCSM over the related work contributions and its benchmarking against cloud storage models implemented in industry as well as academia. Section 6.6 presents the summary of this chapter.
6.2 Evaluation Strategy of Research
SCSM and all of its components are evaluated using heterogeneous methods. For example, the 256-bit SSL is evaluated using the Qualys web-based evaluation methodology, and the security of the RSA partial homomorphic cryptography algorithm is evaluated using a mathematical formula. Among the TTP-associated services, the KM approach is evaluated against compliance recommendations, and the data auditing process is evaluated using system security analysis. The multi-factor authentication and authorization process as well as the SLA are evaluated using a survey. Finally, the overall SCSM process is evaluated using the Skipfish application security scanner and the survey, as shown in Figure 6.1.
Figure 6.1: Evaluation Strategy
6.3 Evaluation and Results of SCSM Components
This section describes the evaluation process and results analysis for each component of SCSM. The evaluation and results for the aggregated SCSM process and the developed prototype are discussed in Section 6.4.
[Figure 6.1 summarizes the strategy: the 256-bit SSL is evaluated with the Qualys methodology (Section 6.3.1); the RSA partial homomorphic cryptography with a mathematical formula (Section 6.3.2); the TTP services, i.e. the KM approach via compliance and auditing (Section 6.3.3) and the data auditing process via system security analysis (Section 6.3.4); the multi-factor authentication and authorization process (RBAC with CRSCG) and the SLA via surveys (Sections 6.3.5.3 and 6.3.5.2); and the overall SCSM process via Skipfish and a survey (Section 6.4).]
6.3.1 Qualys Web-Based Evaluation Methodology
SSL is a standard protocol for encrypting the network communication
channel. However, it may incur vulnerabilities due to inappropriate implementation
techniques or server misconfiguration. A vulnerable SSL can be easily compromised
by a MITM which may result in session hijacking and similar sort of attacks. The
SSL implemented for SCSM server is evaluated using well-defined methodology of
Qualys SSL labs which assigns a grade to installed SSL server based on the final
score achieved considering the evaluation results. The Qualys approach of SSL
evaluation consists of three major steps which include SSL certificate inspection,
inspection of server configuration, and final score with grade assignment (Ivan et al.,
2013).
6.3.1.1 SSL Certificate Inspection
The server certificate is the weakest point of an SSL server configuration. A certificate that is not trusted, i.e. not ultimately signed by a well-known certificate authority, fails to prevent MITM attacks and renders SSL effectively useless, and a certificate that has expired risks the security of the entire network. Considering these requirements, an SSL certificate is assigned a score from 0 to 100%. However, any of the following certificate issues immediately results in a zero score:
Domain name mismatch.
Certificate not yet valid.
Certificate expired.
Use of a self-signed certificate.
Use of a certificate that is not trusted.
Use of a revoked certificate.
The certificate inspection of the SSL implemented for the SCSM server achieved a score of 100% because it does not contain any of the above-listed vulnerabilities, as shown in Figure 6.2. The installed certificate is valid, issued by a trusted certificate authority known as AlphaSSL, and assigned to the domain www.utmcloudstorage.com. The certificate inspection results are shown individually in Figure 6.3 and aggregately in Figure 6.7.
Figure 6.2: Implemented SSL Certificate Details
Figure 6.3: SSL Certificate Inspection
6.3.1.2 Server Configuration Inspection
This inspection is based on testing three categories of SSL certificate which
include protocol support, key exchange, and cipher strength. Protocol support refers
to the number of protocols supported by the implemented SSL. It is assigned a final
score by starting with the score of the best supported protocol, adding it with the
score of the worst supported protocol and finally dividing the result by 2. The scores
of protocols used at a SSL server are provided in Table 6.1.
Table 6.1: Protocol Support Rating Guide
Protocol Score
SSL 2.0 20%
SSL 3.0 80%
TLS 1.0 90%
TLS 1.1 95%
TLS 1.2 100%
Since the SCSM server is configured with SSL 3.0 together with support for TLS 1.0, the resultant score is obtained using the following computation:
Protocol support = (Score of best supported protocol + Score of worst supported protocol) / 2 (6.1)
Substituting the values in equation (6.1):
Protocol support = (90% + 80%) / 2 = 85%
The generated results are also shown individually in Figure 6.4, and aggregately in Figure 6.7.
Figure 6.4: Protocol Support
The key exchange phase serves two functions. One is to perform authentication, allowing at least one party to verify the identity of the other party. The second is to ensure the safe generation and exchange of the secret keys that will be used during the remainder of the session. Since asymmetric cryptography is used for key exchange, the security of this process depends on the strength and size of the private key. The ratings and scores for various key sizes are identified in Table 6.2.
Table 6.2: Key Exchange Rating Guide
Key Exchange Aspect Score
Weak key (Debian OpenSSL flaw) 0%
Anonymous key exchange (no authentication) 0%
Key length < 512 bits 20%
Exportable key exchange (limited to 512 bits) 40%
Key length < 1024 bits (e.g., 512) 40%
Key length < 2048 bits (e.g., 1024) 80%
Key length < 4096 bits (e.g., 2048) 90%
Key length >= 4096 bits (e.g., 4096) 100%
The implemented SSL contains an RSA key of size 2048 bits, so it is awarded a score of 90%, as shown in Figures 6.5 and 6.7.
Figure 6.5: Key Exchange
The final inspection of the server configuration phase tests the cipher strength of the configured SSL. To break a communication session, an attacker will attempt to break the symmetric cipher used for the bulk of the communication. A stronger cipher allows for stronger encryption and thus increases the effort needed to break it. Because a server can support ciphers of varying strengths, the score is calculated by taking the score of the strongest supported cipher, adding the score of the weakest supported cipher, and dividing the result by 2. The scores of various ciphers used by SSL protocols are identified in Table 6.3.
Table 6.3: Cipher Strength Rating Guide
Cipher Strength Score
0 bits (no encryption) 0%
< 128 bits (e.g., 40, 56) 20%
< 256 bits (e.g., 128, 168) 80%
>= 256 bits (e.g., 256) 100%
Since the implemented SSL uses a 256-bit cipher key together with support for a 128-bit key, the resultant cipher strength score is obtained using the following computation:
Cipher strength = (Score of strongest supported cipher + Score of weakest supported cipher) / 2 (6.2)
Substituting the values in equation (6.2):
Cipher strength = (100% + 80%) / 2 = 90%
The generated results are also shown individually in Figure 6.6, and aggregately in Figure 6.7.
Figure 6.6: Cipher Strength
6.3.1.3 Final Score and Grade Assignment
Using the Qualys approach of evaluation, the installed SSL will be provided
with a suitable grade that will identify its security strength. This grade will be
assigned by considering the combination of the final obtained score in all three areas
of server configuration inspection apart from the certificate inspection. The scoring
criteria are specified in Table 6.4 and grading translation is mentioned in Table 6.5.
Table 6.4: Evaluation Criteria
Category Score
Protocol Support 30%
Key Exchange 30%
Cipher Strength 40%
Table 6.5: Letter Grading Translation
Numerical Score Grade
>= 80 A
>= 65 B
>= 50 C
>= 35 D
>= 20 E
< 20 F
The scores achieved by the SSL implemented at the SCSM server are 85%, 90% and 90% in the categories of protocol support, key exchange and cipher strength, respectively. The final evaluation grade is obtained using the following computation:
Final Score = (30% of Protocol Support + 30% of Key Exchange + 40% of Cipher Strength) (6.3)
Substituting the values in equation (6.3):
Final Score = 25.5 + 27 + 36 = 88.5
The obtained final score translates to grade "A" according to Table 6.5, and the result is also shown in Figure 6.7. This is the highest grade for a well-configured and strongly implemented SSL, and it ensures that the entire communication channel to the SCSM server is protected from compromise by MITM attacks.
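The scoring arithmetic above can be reproduced in a few lines, using the category weights from Table 6.4 and the letter thresholds from Table 6.5:

```python
# Weights from Table 6.4 and letter thresholds from Table 6.5.
def qualys_score(protocol: float, key_exchange: float, cipher: float) -> float:
    return 0.30 * protocol + 0.30 * key_exchange + 0.40 * cipher

def grade(score: float) -> str:
    for threshold, letter in [(80, "A"), (65, "B"), (50, "C"),
                              (35, "D"), (20, "E")]:
        if score >= threshold:
            return letter
    return "F"

protocol = (90 + 80) / 2     # best (TLS 1.0) and worst (SSL 3.0), averaged
cipher = (100 + 80) / 2      # strongest (256-bit) and weakest (128-bit)
score = qualys_score(protocol, 90, cipher)   # key exchange: 2048-bit RSA
assert score == 88.5 and grade(score) == "A"
```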
Figure 6.7: SSL Evaluation Results
6.3.2 Mathematical Evaluation
The SCSM implementation is based on RSA partial homomorphic cryptography, which has been considered secure since its development (Milanov, 2009). However, the biggest threat to RSA security comes from vulnerable and flawed implementation techniques applied in application development and in the KM process, i.e. loss or compromise of the private key. RSA can be attacked mathematically when the modulus n of the public key is relatively small: an adversary can factor n to determine the prime numbers p and q, which can then be used to compute the private key of a user. For instance, consider a scenario where Alice wants to send a message to Bob. They have exchanged their public keys for the encryption process via email, but have stored their private keys safely and securely. Alice and Bob have generated the RSA public and private key pairs specified in Table 6.6. Bob will exploit the public key of Alice to determine her associated private key.
Table 6.6: Keys of Alice and Bob
RSA Key Alice Bob
Public pkA = (907, 186101) pkB = (5437, 189781)
Private skA = (2851) skB = (49269)
When Bob obtains the public key of Alice, i.e. pkA = (907, 186101), he will attempt to identify the two prime factors of the modulus n = 186101, either using an automated tool such as a web-based calculator or by a manual calculation using algorithms such as Fermat factoring or Pollard's p - 1 factorization. Assume that Bob used a web-based Prime Factorization Calculator and easily identified the prime factors p = 149 and q = 1249 of the modulus n. Bob can then calculate the private key of Alice using the following systematic computation:
Step 1: Compute φ(n) = (p - 1)(q - 1) (6.4)
Substituting the values of p and q in equation (6.4):
φ(n) = (149 - 1)(1249 - 1) = 184704
Step 2: Confirm that e satisfies 1 < e < φ(n) and gcd(e, φ(n)) = 1 (6.5)
Substituting the value of φ(n) in equation (6.5):
gcd(907, 184704) = 1, so e = 907 fulfils the property.
Step 3: To get the private key d, Bob solves d = e^-1 mod φ(n) (6.6)
Substituting the values of e and φ(n) in equation (6.6):
907d ≡ 1 (mod 184704), which can be solved using the extended Euclidean algorithm.
Step 4: Bob obtains the private key of Alice, i.e. skA = (2851).
Step 5: Terminate. (Goluch, 2011)
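Bob's attack can be reproduced directly in Python. Trial division suffices only because the toy modulus is tiny, and the three-argument `pow` performs the extended-Euclidean inversion of step 3:

```python
from math import gcd

def factor(n: int):
    """Trial-division factoring -- feasible only for a tiny modulus."""
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p, n // p
        p += 1
    raise ValueError("n is prime")

e, n = 907, 186101            # Alice's public key pkA
p, q = factor(n)              # -> (149, 1249)
phi = (p - 1) * (q - 1)       # -> 184704
assert gcd(e, phi) == 1
d = pow(e, -1, phi)           # extended Euclidean inversion of e mod phi
assert d == 2851              # Alice's private exponent skA, recovered
```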
Since Bob maliciously obtained the private key of Alice, he can use it to decrypt any confidential data or records encrypted with Alice's public key, and Alice will have no knowledge of this activity. This demonstrates that, although RSA is a secure cryptographic algorithm, it gives malicious users an opportunity to obtain the private keys of other users if it is implemented with vulnerabilities such as a small, easily factorable modulus n, or if the secret key is lost to an adversary. This research, however, implemented the RSA partial homomorphic cryptography with a randomly generated modulus n of 400 bits, i.e. 121 decimal digits, which is extremely challenging to factor into two primes given the computation capabilities of existing systems. As reported by Franke et al. (2005), even using modern computers with multiple processors and grid computing that can combine hundreds of machines, finding the two prime factors of a 640-bit number (193 decimal digits) took five months on 80 linked 2.2 GHz processors. Extrapolating linearly to a 121-digit modulus:
193 digit number = 5 months
121 digit number = x
By cross multiplication, 193x = 605 months, so x = 605 / 193 = 3.13 months.
This suggests that, using grid computing with 80 linked processors, the modulus n of the SCSM-implemented RSA scheme would take approximately three months to factor. Hence the proposed implementation secures the data and the associated private key of the user. The confidentiality of data is also reinforced by the proposed KM approach, in which users are required to change their private key after each 30-day cycle as a best practice, as described in Chapter 4; thus the SCSM implementation does not give an adversary an opportunity to breach the confidentiality of users' data at the cloud storages.
6.3.3 Compliance Evaluation
Data classified as confidential due to reasons of regulatory compliance
(PCIDSS and HIPAA) or corporate secrecy must be protected against unauthorized
access as well as view. Especially, in a cloud computing environment where clients
do not trust the provider, data must be always encrypted. However, data encryption
raises a complexity of KM, and currently there are no appropriate tools to address
this shortcoming (Michael, 2011). The cryptographic keys of clients must be secured
in the same fashion as the encrypted data itself. In other words, if keys are lost
means data are lost (Rajasekar and Chris, 2010). There must be suitable security
measures applied in KM approach to ensure that cryptographic keys are generated,
stored, shared, used and destroyed on basis of confidentiality, integrity and
authenticity. In order to address this issue, SCSM is based on a secure KM approach
which is evaluated in compliance with KM recommendations provided by GFIS
(Michael, 2011). Although in cloud computing environments, practices such as KM
approach are evaluated by professional industry based auditors, however due to
unavailability of such a facility, evaluation in this research is based on a self-auditing
process but strictly considering the compliance with required recommendations. The
evaluation process and obtained auditing results are specified in Table 6.7.
Table 6.7: Key Management Compliance and Auditing

1. GFIS recommendation: Keys should be generated in a secure environment and using suitable key generators.
Proposed KM approach: Keys are securely and safely generated by the SCSM system and provided to the requesting users without any hidden vulnerabilities or security leaks.
Audit result: Compliant (√)

2. GFIS recommendation: Cryptographic keys should be used for one purpose only.
Proposed KM approach: Each key is used only for a unique task. For example, the RSA public key is used for data encryption and the associated private key for data decryption, whereas the DSA public key is used for data integrity verification and the associated private key for the VMD generation task.
Audit result: Compliant (√)

3. GFIS recommendation: Keys must be distributed securely.
Proposed KM approach: Keys are distributed to the privileged users under the security of an encrypted SSL channel.
Audit result: Compliant (√)

4. GFIS recommendation: The cloud administrator must have no access to clients' keys.
Proposed KM approach: The client's secret keys are stored with the TTP and no one besides the client has the privilege to use those keys.
Audit result: Compliant (√)

5. GFIS recommendation: Keys should always be stored encrypted and securely archived together with redundant backups to avoid losing a key.
Proposed KM approach: The secret keys are automatically encoded in small (.wav) sound files while uploading and remain stored and archived with the TTP securely under the security of TPM, with redundant backups.
Audit result: Compliant (√)

6. GFIS recommendation: Keys should be changed regularly.
Proposed KM approach: Clients are recommended to change their secret keys periodically, i.e. after each 30-day cycle.
Audit result: Compliant (√)

7. GFIS recommendation: Access to KM functions should require a separate authentication.
Proposed KM approach: The client's secret keys can only be retrieved and decoded by the appropriate user under the strict multi-factor authentication and authorization process using RBAC with CRSCG.
Audit result: Compliant (√)

8. GFIS recommendation: If keys are no longer required, they should be destroyed or deleted in a secure manner.
Proposed KM approach: The client and the TTP both destroy the keys from primary and backup locations using DBAN when no longer required. The deletion process is not recoverable.
Audit result: Compliant (√)
The auditing results in Table 6.7 clearly indicate that the SCSM system is based on a KM approach which effectively complies with the secure KM recommendations provided by GFIS. Using SCSM, users such as data owners can protect their confidential data and verify data integrity using asymmetric cryptography techniques without being concerned about key storage, usage and securing tasks. As recommended by GFIS and CSA, the CSP must not have any access to clients' keys (Michael, 2011; CSA, 2011), so this research proposes the use of a TTP for storing the keys safely and securely. However, as mentioned by Ranchal et al. (2010) and assumed by this research, the TTP might be malicious as well, so keys are encoded using sound steganography when stored with the TTP and are decoded automatically whenever retrieved by the client. The implementation of the proposed KM approach in SCSM, operating together with the other components, will preserve the confidentiality and integrity of clients' data and will give clients a great sense of control in securing their data at third-party cloud storages.
6.3.4 Security Analysis
Since in a cloud computing environment the CSP is the most untrusted entity, clients cannot rely on the CSP to determine whether their data integrity is intact. It is a recommended security guideline for clients to involve a TTP for data integrity verification in the cloud, since clients do not possess the professional knowledge and capabilities to perform data auditing services themselves (Marshal, 2013; Cong et al., 2010). The TTP is supposed to be a trusted and independent organization that performs data integrity tasks without favouring either clients or the CSP; this greatly reduces the auditing burden on the clients. However, involving a TTP can raise concerns about breaching clients' data confidentiality due to flaws in the data integrity verification process (Ranchal et al., 2010). Although the TTP is supposed to be an authorized, professional and certified organization, internal employees of the TTP, such as the TTPA in charge of data integrity verification, might be malicious. To ensure safe auditing services in the cloud, the TTPA must be able to audit clients' data without demanding a local copy, and involving the TTPA in the auditing process must not introduce new threats to data confidentiality (Ling et al., 2011).
The security of the SCSM data auditing process is systematically analyzed against the potential threats of a malicious TTPA using the system workflow snapshots described in Chapter 5. Using SCSM, clients first generate VMD for the files stored at the cloud storage, which is achieved using DSA with SHA-1. Clients then send the VMD to the TTPA as an encoded sound file for protection from access by malicious parties. During the auditing process, the TTPA does not require a local copy of the data beyond access to the files on the cloud to perform data integrity verification, as shown in Figure 5.8. A malicious TTPA is also unable to breach the confidentiality of clients' data, since the records are stored encrypted at the cloud storage and the files can be decrypted only by the privileged clients. Although the TTP also stores the RSA private key of the clients, it is encoded using sound steganography and can be decoded only by the appropriate client, with access control guarded by RBAC with CRSCG; thus, using SCSM, even a malicious employee of the TTP cannot threaten or violate clients' data. The TTPA only performs effective auditing services to ensure that clients' data on the cloud storage always remain intact with consistent integrity.
6.3.5 Survey Based Evaluation
SCSM is based on five components, three of which were evaluated, with their results analyzed, in Sections 6.3.1, 6.3.2, and 6.3.3 respectively, while the second part of the TTP services, i.e. the data auditing process, was evaluated and its results discussed in Section 6.3.4. The two remaining components of SCSM (the SLA and the multi-factor authentication and authorization process), and the aggregated SCSM process, were evaluated using a survey. The survey structure, data analysis and evaluation results for the two components are described in this section, whereas the evaluation results for the aggregated SCSM process are described in Section 6.4. The problem area of the research was also queried in the survey; however, the response obtained for that question is analyzed in Chapter 1.
6.3.5.1 Structure of Survey
The survey was designed and analyzed using the web-based tool
www.surveymonkey.net. The aim of the survey was to collect feedback from cloud
computing and information security experts and professionals across the globe.
This research selected the required sample of participants through
www.linkedin.com, which enabled us to find appropriate and authentic respondents
from academia and industry who were highly suitable to participate in this survey.
LinkedIn was chosen as the medium for selecting and inviting the professionals
because it is considered the most trusted global network used by IT professionals,
decision makers and the mass affluent (Friedman and Savio, 2013; Forrester, 2012).
Participants were invited to join the survey only after carefully analyzing their
profiles, recommendations from colleagues, background records, endorsements,
updates and activities, to verify the validity of their displayed information and to
avoid spammers. It was verified that the invited participants have in-depth
knowledge and expertise in the domain of cloud security.
The survey invitation was sent to a total of 90 experts. The selected number
might sound low, which is due to the small number of professionals in the area of
cloud security, since it is a new and emerging research and development field. From
the sample of 90, only 34 responded. The response rate was low due to certain
factors, mainly the lack of a direct relationship with, or awareness among, the
respondents, and their unwillingness to provide personal information such as name
and email address in view of their organizations' policies. From the received
responses, four were incomplete and were filtered out to ensure effective data
analysis. The final results are therefore based on a total of 30 responses. The job
roles of the participants mainly include information security analysts, data auditors,
cloud computing researchers, developers, architects and security specialists. As
shown in Table 6.8, fourteen experts from well-known industries such as Amazon,
Google, HP, IBM, Microstrategy, and Microsoft also participated in the survey and
provided their valuable feedback.
Table 6.8: Participation of the Industry Experts in Survey
Industry        Number of Respondents
Amazon          3
Google          1
HP              3
IBM             4
Microsoft       2
Microstrategy   1
Total           14
The survey was conducted following research ethics: the participants were
given complete information about the objectives of the study, and their data
confidentiality was preserved, since the information they provided is used
anonymously for data analysis. The survey was based on one open-ended and four
closed-ended questions. Only the first question was open-ended, as it was tailored to
collect basic information about the respondent, i.e. name and email address. In
order to respect the privacy of the participants, answering this question was
optional. Each question was informative, descriptive, and designed for a unique
purpose. For example, the objective of the second question was to determine the
real impact of data confidentiality and integrity problems on the adoption of cloud
storage services, whereas the third question was asked to evaluate the overall
process of SCSM in order to identify its acceptance rate and its strength in
achieving its intended aim. The fourth question was included to evaluate the
security and strengths of the proposed multi-factor authentication and authorization
process. The final, fifth question was included to evaluate the applicability and
acceptance of the tailored SLA. For each question except the first, the participants
were required to provide their response by considering the feedback statement
provided under the question. The response scale was based on three options, i.e.
Agree, Neutral and Disagree.
6.3.5.2 Survey Analysis for Multi-factor Authentication and
Authorization Process
SCSM is guarded by a multi-factor authentication and authorization process
which serves as an additional layer of security for accessing the system beyond
traditional username and password authentication. The strength of SCSM is that it
prohibits invalid access to system operations such as encryption, decryption and
other tasks even if the username and password of a privileged user are compromised
to an adversary, since performing each task is controlled using CRSCG with RBAC,
as described in Chapter 4. In order to evaluate the strengths of this process, the
following question was included in the survey to obtain feedback from industry and
academia experts.
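The RBAC-with-CRSCG gating described above can be illustrated with a minimal sketch. The role names, permission sets and codes here are hypothetical placeholders, not the actual SCSM policy: an operation succeeds only when the role grants it and the per-session secret code matches the one issued by the server.

```python
# Minimal role-based access control check (hypothetical roles and permissions):
# an operation is allowed only if the user's role grants it AND the
# session secret code presented by the user matches the issued one.
ROLE_PERMISSIONS = {
    "client_admin": {"encrypt", "decrypt", "upload", "download"},
    "auditor": {"verify_integrity"},
}

def is_authorized(role: str, operation: str,
                  presented_code: str, issued_code: str) -> bool:
    return (operation in ROLE_PERMISSIONS.get(role, set())
            and presented_code == issued_code)

print(is_authorized("client_admin", "decrypt", "Ab3$xY9!Qw2#", "Ab3$xY9!Qw2#"))  # True
print(is_authorized("auditor", "decrypt", "Ab3$xY9!Qw2#", "Ab3$xY9!Qw2#"))       # False
```

Even with a stolen username and password, an adversary without the freshly issued code fails the second condition, which is the property the survey question asks respondents to assess.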
Question: Using the developed system, after the initial successful login by
entering the username and password, a controlled environment governed by RBAC
will be created. It will allow a user to perform all those operations which are
granted to his/her role. In order to perform an operation, the user will request the
cloud server, and the CRSCG component will generate a random 12-character secret
code made from a set of upper and lower case letters, special symbols, numbers and
all other characters on a standard keyboard. This secret code will be sent to the
requesting user via an HTTPS connection as an email alert to perform the desired
operation(s). The secret code can be used for the entire session or until the user
requests a fresh one.
Feedback: The proposed multi-factor authentication and authorization
process is secure and will provide an additional layer of security for the system,
since an illegal authority cannot perform any privileged operation even if the
username and password of a user are compromised.
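A CRSCG-style code generator matching the question's description could be sketched with Python's `secrets` module. This is a minimal illustration under the stated requirements (12 characters drawn from letters, digits and keyboard symbols); the actual SCSM implementation is not shown in this chapter.

```python
import secrets
import string

# Character pool: upper/lower-case letters, digits and punctuation,
# approximating "all other characters on a standard keyboard".
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_secret_code(length: int = 12) -> str:
    """Generate a cryptographically random per-session secret code."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

code = generate_secret_code()
print(len(code))  # 12
```

Using `secrets` rather than `random` matters here: the module draws from the operating system's CSPRNG, so codes cannot be predicted from earlier outputs.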
The overall response gathered for the evaluation of proposed multi-factor
authentication and authorization process is mentioned in Figure 6.8 and Table 6.9.
Figure 6.8: Results for Multi-factor Authentication and Authorization Process
Table 6.9: Analysis of Multi-factor Authentication and Authorization Process
Answer Choices Response Rate Academia Industry Total
Agree 80% 14 10 24
Neutral 13.33% 1 3 4
Disagree 6.67% 1 1 2
The survey analysis shows that 80% of the respondents agreed that the
proposed multi-factor authentication and authorization process is secure and will
provide an additional layer of security to the SCSM system. However, 13.33% of the
respondents were neutral towards the proposed idea, which might be due to the fact
that such a solution is new to the cloud computing environment, so they may not be
sure about its suitability and applicability. The disagree rate is very low compared
to the agree rate, which suggests that this process will enhance the level of
protection for preserving the confidentiality of users' data while using cloud
storage. Consider, for instance, the incident in the Dropbox system where users
were able to access their accounts without a password due to a vulnerable code
update (Hunsinger and Corley, 2013). Under such circumstances malicious users
could violate the confidentiality of other users' data just by knowing their
usernames. In similar scenarios SCSM will preserve data confidentiality and
integrity due to the successful implementation of the proposed multi-factor
authentication and authorization process.
6.3.5.3 Survey Analysis for Service Level Agreement
Beyond guaranteeing the confidentiality and integrity of data, the delivery of
the trusted services required by clients' organizations is a vital task, and it can be
achieved only by enforcing the required terms, conditions, service and security
levels in an effective SLA. Unfortunately, the existing SLAs provided by CSPs are
not effective and flexible enough to accommodate the security and privacy
requirements of organizations dealing with confidential data that wish to adopt
cloud storage services (Asha, 2012). In order to address this issue, this research
tailored an SLA, which was evaluated through the following survey question to
determine its acceptance rate and applicability. The entire SLA is too lengthy to fit
into a survey question, so only its most significant parts were used for the
evaluation process, mainly those related to the security and privacy of clients' data
which are missing from existing SLAs.
Question: Besides the technical issues, the absence of an effective SLA is also
a major reason for organizations not to use cloud storage services. This research
designed an SLA which focuses on the security requirements of organizations
requiring consistent data confidentiality and integrity. Key elements of the proposed
SLA are defined as follows:
The CSP will provide physical and logical security to avoid illegal access,
theft or tampering of data from outsider or malicious insider attacks. The CSP
should also allow third-party security audits to validate the security controls and
standards used to protect clients' data on the cloud.
Data will be encrypted with public key cryptography techniques and located
within permitted backup zones. Decryption keys will be provided only to the
client's legal admin. The client's data will not be revealed to any unauthorized
party, not even the CSP, while at storage or during any operation.
The CSP will facilitate efficient KM services, which include the secure
generation, use, safe storage and destruction of keys. The TTP will store the client's
keys in a secure manner and provide them to the client whenever requested.
Conducting the audit service, generating auditing reports, and storing the
client's keys with redundant backups and safety will be the responsibilities of the
TTP. Auditing reports will be shared with the client and the CSP.
If the client's data are violated by an outsider or a malicious insider, the CSP
will immediately report to the client. A penalty will be imposed on the CSP in
consideration of the data violation, which can take the form of a cash amount or
free service, as settled between the client and the CSP according to the sensitivity
of the data and the impact or nature of the violation. If the client is found to be
malicious, the CSP may stop the service and impose cash penalties. This research
does not assume any fixed penalty because it may vary for different organization
types such as healthcare, education or banking.
The client must be permitted to exit from the CSP when the guarantees are
repeatedly unmet. When an organization exits, all of its data must be provided to it
with maintained confidentiality and integrity. The CSP must also remove the
organization's data from storage disks and associated backup devices at each
location.
Feedback: When the above SLA elements are merged with the actual SLAs
offered by well-known CSPs, they will enhance the clients' level of trust for using
remotely located cloud storage services to store their confidential data.
Figure 6.9: Results for SLA
Table 6.10: Analysis of Service Level Agreement
Answer Choices Response Rate Academia Industry Total
Agree 86.67% 14 12 26
Neutral 13.33% 2 2 4
Disagree 0% 0 0 0
The survey analysis specified in Figure 6.9 and Table 6.10 shows that 86.67%
of respondents agreed with the proposed SLA, no respondent disagreed, and 13.33%
responded as neutral. The high agreement rate indicates that there is a significant
need for an SLA which addresses the security and privacy policies of organizations
dealing with confidential data. The existing SLAs provided by CSPs are static for
every client, whether it is an ordinary user or an organization, such as a healthcare
or banking institution, dealing with sensitive records and required to follow strict
security standards and compliance. The cloud professionals from Amazon, IBM and
HP were also satisfied with the key points of the proposed SLA, and they agreed
with the feedback statement that having such an SLA will enhance clients' trust for
adopting remotely located third-party cloud storage services. Considering the
obtained evaluation results, it is clear that users of SCSM need not be concerned
about data confidentiality, integrity and trust issues when their data are stored at
the cloud, which may lead to an increase in the adoption of cloud storage services.
6.4 Evaluation of SCSM using Survey and Skipfish
The design and development of SCSM is based on five isolated components
combined to work as a system. Each component was evaluated and the results
analyzed in Section 6.3. This section describes the evaluation of the final SCSM
process and the obtained results. SCSM was evaluated using two methods, i.e. a
survey and the skipfish security scanner. The workflow of SCSM and the security
mechanisms used were evaluated using the third question of the same survey
described in Section 6.3.5, to analyze its security strengths and significance for the
cloud computing industry. The developed prototype was evaluated using skipfish
to identify and analyze hidden vulnerabilities in the SCSM domain
www.utmcloudstorage.com. SCSM was evaluated using the following survey
question, which is based on a detailed description of SCSM functionalities,
including the developed security processes, and the required feedback.
Question: In order to overcome the data confidentiality and integrity
concerns that prevent organizations dealing with confidential data from using cloud
storage services, and to ensure the delivery of trusted cloud storage services to
clients, the undertaken research designed, developed and deployed a secure cloud
storage model consisting of the following processes:
Each data cell is encrypted and decrypted by the client directly from the
upload and download stream using RSA partial homomorphic cryptography. The
client is able to perform operations (insertion, updating, deletion, and limited
computations) on data while it remains encrypted at the cloud storage.
The client generates the VMD using SHA-1 and sends it to the TTP as an
encoded sound file which can only be decoded by the TTP to perform auditing
services. Following KM best practices, the client sends the secret decryption keys to
the TTP for secure storage. Keys are automatically encoded in small sound files
while uploading. Only the client is privileged to decode the sound file containing
the secret keys.
The TTP performs the requested auditing services. If data are violated, they
will be recovered from the backup zone by requesting the cloud admin. The TTP
will send a safe signal to the client if the final data status is intact; otherwise,
actions will take place in accordance with the SLA, described in Question-5.
The developed model is deployed on a cloud computing infrastructure
implementing 256-bit SSL. An access-records log file is maintained for detecting
violations or fake claims from malicious users.
Operations are performed only by authorized and privileged users due to the
implementation of the multi-factor authentication and authorization process using
RBAC with CRSCG, described in Question-4.
Feedback: This model can be considered one of the valuable contributions in
the field of cloud security, enabling organizations dealing with confidential data to
acquire and use cloud storage services with trust and without data confidentiality
and integrity concerns.
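The sound-file encoding of keys and VMD mentioned in the question above can be illustrated with a minimal least-significant-bit (LSB) sketch over raw audio sample bytes. The chapter does not specify SCSM's steganography scheme at this level of detail, so this is an assumed, simplified analogue rather than the actual implementation.

```python
def embed(samples: bytearray, payload: bytes) -> bytearray:
    """Hide the payload bits in the least significant bit of each sample byte."""
    out = bytearray(samples)
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("cover audio too small for payload")
    for pos, bit in enumerate(bits):
        out[pos] = (out[pos] & 0xFE) | bit  # overwrite only the lowest bit
    return out

def extract(samples: bytearray, payload_len: int) -> bytes:
    """Recover payload_len bytes from the sample LSBs."""
    bits = [samples[i] & 1 for i in range(payload_len * 8)]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(payload_len)
    )

cover = bytearray(range(256)) * 4   # stand-in for raw PCM audio samples
secret = b"RSA-key"                 # placeholder for the client's key material
stego = embed(cover, secret)
print(extract(stego, len(secret)))  # b'RSA-key'
```

Flipping only the lowest bit of each sample leaves the audio perceptually unchanged, which is why the carrier file is unremarkable to a malicious observer; in SCSM, access to the decoding step is additionally gated by RBAC with CRSCG.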
Table 6.11: Analysis of SCSM
Answer Choices Response Rate Academia Industry Total
Agree 83.33% 15 10 25
Neutral 13.33% 1 3 4
Disagree 3.33% 0 1 1
Figure 6.10: Results for SCSM
The survey analysis specified in Table 6.11 and Figure 6.10 shows that
83.33% of respondents approved the contribution of SCSM and agreed with the
feedback statement that it can achieve its aim of preserving data confidentiality and
integrity and ensuring the delivery of trusted cloud storage services to clients. The
respondents also agreed that SCSM can be considered one of the valuable
contributions in the field of cloud security, as it will benefit both CSPs and clients.
CSPs can adopt and use SCSM to offer trusted cloud storage services to business
organizations which require consistent data confidentiality and integrity guarantees.
In turn, SCSM clients will have a greater degree of control over their confidential
data and can always ensure the delivery of trusted services as specified in their
SLA. The disagree rate (3.33%) is low compared to both the agree rate and the
neutral rate (13.33%). The respondents who selected neutral may have done so due
to uncertainties such as ensuring the correct configuration of the SSL certificate or
the security analysis of the specified cryptographic process. However, the
individual evaluation results for these components proved that they are well-defined
and secure enough to achieve their intended objectives.
Besides the survey results, the web-based prototype of SCSM was evaluated
using skipfish, an open source web security scanner provided by Google and written
in the C programming language. The aim of the evaluation using skipfish was to
determine whether the SCSM implementation code is vulnerable to Cross-site
Scripting (XSS), Structured Query Language (SQL) injection and XML injection
attacks (McRee, 2010). Skipfish identifies potential low, medium and high risk
vulnerabilities which may lead to a breach of data confidentiality, integrity or trust.
In order to test the SCSM prototype, skipfish was installed on Ubuntu and compiled
using the C compiler. The evaluation report generated when scanning was
conducted for the domain www.utmcloudstorage.com is shown in Figure 6.11, and
the interactive detailed evaluation report is shown in Figure 6.12.
Figure 6.11: Skipfish Security Scanning Report
Figure 6.12: Skipfish Interactive Report
The skipfish report identified two vulnerabilities and five warning notes in the
SCSM prototype. The identified vulnerabilities were analyzed to determine whether
they can cause serious threats to system operations. As shown in Figure 6.11, the
first vulnerability is of high risk and the second of low risk. These vulnerabilities
are flagged by skipfish as XSS flaws, since two links were identified in the SCSM
system: one concerning JavaScript and the second an image URL targeting an
external system. However, these links were inserted in the SCSM code after careful
review, and their only function is to provide information to system users about the
installed SSL certificate and its validity. The five warning notes arise from several
causes, such as the configuration of the VPS and communication properties, i.e. the
enabling or disabling of cookies. These warnings do not impact the security of the
SCSM server, and it is safe from potential vulnerabilities that may lead to a breach
of system security.
6.5 Benchmarking of SCSM with Industry and Academia Best
Practices
Considering the aim of the undertaken research to develop an improved and
enhanced data confidentiality and integrity preserving secure cloud storage model,
this section benchmarks SCSM and all of its components against the best practices
of industry- and academia-implemented cloud storage models. The benchmarking
identified that although S3 and GCS incorporate advanced security and privacy
preserving methods, such as cryptography, data auditing services, a KM approach,
an SLA, a multi-factor authentication and authorization process, and SSL, to
preserve data confidentiality and integrity at cloud storages, some of these
components have vulnerabilities or shortcomings. For example, the KM approach in
these storage services is not trustworthy because the keys are managed by the CSPs;
under such approaches, government officials such as the NSA can obtain clients'
keys from the CSP to decrypt and illegally access or view their sensitive records
(Ferreira, 2013). The cryptography algorithms used by S3 and GCS have limitations
in terms of functionality and usage: when data is encrypted at GCS or S3, it cannot
be processed to perform computations without decrypting it each time (Murali et
al., 2013). These cloud storage services also do not include security and privacy
guarantees in their SLAs (Asha, 2012), which creates a significant barrier of trust
between clients and CSPs, especially when clients are seeking to store mission-
critical data. The data auditing services of these cloud storage solutions are
performed automatically on all uploads and downloads; since CSPs are semi-trusted
entities, they may hide data corruptions caused by server hacks or byzantine failures
to maintain their reputation (Cong et al., 2013). Therefore, these data auditing
approaches are not trustworthy. In order to guarantee authorized access to critical
resources and data, S3 and GCS incorporate a strict multi-factor authentication and
authorization process, but it can be further extended to strengthen its security. For
the data transfer process, these CSPs have configured their servers to use the latest
256-bit SSL, which is currently the best available industry standard. These
limitations of the industry-implemented cloud storage services are described in
greater detail in Section 2.8 of Chapter 2.
Apart from the industry-based contributions, researchers from academia have
also developed data confidentiality and integrity preserving secure cloud storage
models. These contributions suggested improvements to the industry-implemented
cloud storage solutions, but they have their own shortcomings, as described in
Section 2.10 of Chapter 2. Table 6.12 summarizes the comparison of SCSM with
the industry- and academia-implemented cloud storage models. The benchmarking
results show that there are significant advantages to using SCSM compared to the
contributions of academia and industry, and that SCSM overcomes their
limitations. SCSM is a pure cloud computing solution which is ubiquitously and
pervasively accessible, and it does not rely on maintaining a personalized proxy
server. Using SCSM, clients engage with only a single CSP, which enables them to
sustain simplified SLAs, and data owners have a sufficient degree of control over
their data since the CA is privileged to perform data encryption, decryption and
processing as well as VMD generation tasks. Performing these tasks does not
require in-depth technical knowledge of cryptography, security or cloud computing;
an IT admin with intermediate knowledge can handle these operations efficiently.
Unlike other cloud solutions, with SCSM the CA is not required to encrypt data
before sending it to the cloud storage: data are encrypted automatically in real time
directly from the upload stream, while being uploaded but before arriving at the
cloud storage, so that they are stored completely encrypted on arrival. Similarly for
the decryption process, data are not decrypted at the CSP's site and then transferred
to the client, as this could raise concerns of breaching data confidentiality. Data
depart from the cloud storage encrypted and are decrypted in real time from the
download stream; the contents can be downloaded or viewed once they arrive at the
CA's site. With the use of RBAC and CRSCG, clients can ensure that only
authorized users are able to perform tasks according to their privileges, which
provides a sense of assurance to the client. For performing a transaction, the CA is
not required to decrypt the records, due to the implementation of partial
homomorphic cryptography, which saves time and bandwidth cost (Cong et al.,
2012). In order to give clients the full advantage of acquiring a cloud storage
service, under SCSM the CA stores keys and VMD with the TTP instead of in local
storage. These parameters are encoded in sound files which can be decoded only by
the privileged users, so even a malicious TTP cannot access the client's significant
parameters. All parameters, i.e. data, keys and VMD, are also secured using strong
256-bit SSL during the communication process. The TTP acts as an independent
and trusted authority performing the services specified in the SLA without favoring
either the client or the CSP. SCSM also creates fairness in the delivery of services:
malicious users, i.e. the CA or the CSPA, are easily detected and penalized if they
violate the settled SLA. For integrity checks, the TTPA initiates an efficient
auditing process in which it is also easy to determine the location of violated data
within a large file, enabling an efficient data recovery process. During the recovery
process, the CSPA views the auditing reports to determine the actual data that need
to be recovered, instead of recovering the complete file.
Table 6.12: SCSM Benchmarking with Industry and Academia Implemented Solutions

Amazon S3:
  Cryptography Support: AES 256-bit
  Data Auditing Service: CSP dependent (untrustworthy)
  Key Management Approach: clients or Amazon manage the keys
  Service Level Agreement: no security or privacy guarantees in the SLA
  Authentication and Authorization Process: username and password, access
    control policy, and 6-digit numeric token
  Secure Socket Layer: 256-bit

Google Cloud Storage:
  Cryptography Support: AES 128-bit
  Data Auditing Service: CSP dependent (untrustworthy)
  Key Management Approach: clients or Google manage the keys
  Service Level Agreement: no security or privacy guarantees in the SLA
  Authentication and Authorization Process: username and password, access
    control policy, and 6-digit numeric token
  Secure Socket Layer: 256-bit

Nepal et al. (2011):
  Cryptography Support: symmetric cryptography (algorithm not mentioned by
    the authors)
  Data Auditing Service: performed by the IMS provider
  Key Management Approach: keys are stored with the KM provider (vulnerable
    approach)
  Service Level Agreement: X
  Authentication and Authorization Process: X
  Secure Socket Layer: X

Nirmala et al. (2013):
  Cryptography Support: AES (bit size not mentioned by the authors)
  Data Auditing Service: performed by the clients
  Key Management Approach: X
  Service Level Agreement: X
  Authentication and Authorization Process: X
  Secure Socket Layer: X

Puttaswamy et al. (2011):
  Cryptography Support: symmetric cryptography (algorithm not mentioned by
    the authors)
  Data Auditing Service: X
  Key Management Approach: keys are stored at an internal server (vulnerable
    approach)
  Service Level Agreement: X
  Authentication and Authorization Process: traditional username and password
  Secure Socket Layer: X

Seiger et al. (2011):
  Cryptography Support: AES (bit size not mentioned by the authors)
  Data Auditing Service: performed by the proxy server
  Key Management Approach: X
  Service Level Agreement: X
  Authentication and Authorization Process: X
  Secure Socket Layer: X

Varalakshmi and Deventhiran (2012):
  Cryptography Support: 4-bit key size encryption algorithm
  Data Auditing Service: performed by the broker
  Key Management Approach: X
  Service Level Agreement: X
  Authentication and Authorization Process: X
  Secure Socket Layer: X

Secure Cloud Storage Model:
  Cryptography Support: partial homomorphic RSA 200-bit
  Data Auditing Service: responsibility of the TTP (trusted and secure)
  Key Management Approach: the TTP manages the keys securely and safely
  Service Level Agreement: guarantees security and privacy requirements
  Authentication and Authorization Process: username and password, and a
    12-character token consisting of alphanumeric as well as special characters
  Secure Socket Layer: 256-bit

(X denotes that the component is not supported or not addressed.)

Besides the comparison of SCSM with industry- and academia-developed
solutions, this research further benchmarked each SCSM component against the
industry-implemented best practices. It should be noted that since SSL is an
industry standard, we have not benchmarked the SSL configured for SCSM against
industry best practices; SCSM already uses the best available practice for
transferring data through an encrypted channel, the latest 256-bit SSL. The
benchmarking results and the novelty of the SCSM components are described in
detail in the following sub-sections.
6.5.1 Secure and Flexible Partial Homomorphic Cryptography
While analyzing the best practices used by the cloud industry to preserve data
confidentiality, this research identified that S3 and GCS support client-side as well
as server-side encryption using AES 256-bit and 128-bit respectively. Although
AES-256 is proven secure enough to protect mission-critical data, AES-128 is not
considered strong enough to protect sensitive records at third-party cloud storages,
because it can be cracked by powerful technology in a reasonable processing time
(Ferreira, 2013). Beyond this concern, the encryption processes of S3 and GCS have
limitations in terms of functionality and usage (Murali et al., 2013). The RSA
partial homomorphic cryptography implemented in this research surpasses the
benchmark of the current best cryptography practices used in industry-implemented
cloud storage solutions. The developed cryptography algorithm is strong and proven
secure by the mathematical evaluation process discussed in Section 6.3.2. The users
of SCSM can also perform computations on their encrypted records without
decrypting the data each time. Besides the security strengths, the performance of
the SCSM encryption and decryption processes was also analyzed experimentally.
Prerna and Abhishek (2013) analyzed the performance of RSA cryptography to
determine the average time required to encrypt and decrypt data. Compared with
their results, SCSM requires less time to encrypt and decrypt data files stored at the
cloud storage. The performance graphs for the data encryption and decryption
processes are shown in Figures 6.13 and 6.14 respectively.
Figure 6.13: Performance Analysis of Encryption Process
Figure 6.14: Performance Analysis of Decryption Process
[Both figures plot time in milliseconds against file sizes of 153, 196, 312 and 868
kilobytes, comparing SCSM with the results of Prerna and Abhishek (2013).]
During the experiment, four files of varying sizes were provided to the SCSM
system for the encryption and decryption tasks, and the time required to perform
these operations was measured in milliseconds. The results show that the
computation time gradually increases with the size of the data file, but in each case
SCSM takes less time than the average times reported by Prerna and Abhishek
(2013) for RSA. Therefore, the cryptography tasks performed using SCSM do not
impose a computational overhead, and through this process clients can ensure that
the confidentiality of their data always remains preserved at third-party cloud
storages, since no unprivileged authority can view or access their sensitive records
when the data are encrypted and backed by the secure KM approach of SCSM.
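The partial homomorphic property of RSA on which SCSM relies is its multiplicative homomorphism: the product of two ciphertexts decrypts to the product of the two plaintexts. A toy textbook-RSA sketch makes this concrete; the tiny primes and exponents below are chosen purely for illustration and are not the 200-bit parameters used by SCSM.

```python
# Toy textbook RSA with tiny parameters -- for illustration only, never for real use.
p, q = 61, 53
n = p * q                          # modulus: 3233
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent via modular inverse

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

m1, m2 = 7, 9
# Multiply the ciphertexts at the server: the product decrypts to m1 * m2
# without the server ever seeing either plaintext.
c_product = (enc(m1) * enc(m2)) % n
print(dec(c_product))  # 63
```

This is why a client can have limited computations performed on encrypted records at the cloud storage, as claimed above, provided the true product stays below the modulus n.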
6.5.2 Security and Privacy Guaranteeing Service Level Agreement
In order to analyze the cloud industry's best practices for SLAs, this research
reviewed the SLAs of S3 and GCS. It was identified that the major components of
these SLAs focus on service availability, uptime, downtime, credit requests,
payment procedures, and error rates. These SLAs are also fixed and non-negotiable
and do not provide data privacy or security guarantees; therefore, they are unable to
satisfy the security and privacy requirements of organizations dealing with mission-
critical data that wish to adopt cloud storage services (Asha, 2012).
In order to surpass the current industry benchmark for constructing cloud
SLAs, this research designed an effective SLA as a component of SCSM, based on
core security and privacy elements. This SLA is designed to assist organizations
dealing with highly sensitive data to adopt cloud storage services without
confidentiality and integrity concerns. Its major components focus on data
ownership, protection and control, regulatory compliance, the roles and
responsibilities of the involved users, data security auditing and controls, privacy
guarantees, indemnification, disaster recovery and an exit clause. Considering the
research scope, the proposed SLA covers only the data security and privacy
requirements of an organization; other requirements, such as service availability
and processing capabilities, are not covered. However, this research assumes that
these requirements will exist in the final SLA according to the CSP's service
capabilities and commitments.
6.5.3 Trusted, Secure and Efficient Data Auditing Service
The data integrity verification practices used in industry-implemented cloud
solutions are not trusted by clients because the CSPs themselves are in charge of
conducting the auditing services, so they might hide their mistakes in order to
protect their reputation (Ling et al., 2011). This research surpasses the industry
practice of data auditing by developing an efficient, trusted and secure approach for
conducting data auditing services. As shown in the evaluation process, the
developed integrity verification process is secure, since it does not bring new
vulnerabilities to the system, and it is trusted, because auditing is performed by an
independent third-party auditor who performs the required tasks as specified in the
SLA without favoring any involved party. Besides the security strengths, it was
also necessary to analyze the performance of the SCSM data auditing approach to
determine its efficiency and to ensure that it does not cause computational
overhead. The performance of the SCSM data auditing approach was compared
with the recent results published by Roshan et al. (2014), who used similar
techniques to verify the integrity of files on the cloud. The generated results are
illustrated by the graph in Figure 6.15.
169
Figure 6.15: Performance Analysis of Data Integrity Verification Process
The experiment was conducted by providing eight files of varying sizes, from 2 to 9 kilobytes, to the SCSM system. It was proved that SCSM consumed less time to verify the integrity of each file compared with the results of Roshan et al. (2014). The time required to verify data integrity rises gradually with increasing file size, but it neither causes computational overhead nor affects the overall performance of the system. The data auditing service performed using SCSM can be effectively used by the TTP to verify that clients' data always remains intact at third party cloud storages. This process will ensure that sensitive client records are protected against tampering, modification and deletion by illegal entities such as malicious insiders and outsiders.
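The thesis does not reproduce the exact verification routine here, but a TTP-side integrity check of this kind is commonly built on keyed hashes. The sketch below records an HMAC fingerprint at upload time and recomputes it at audit time; the function and variable names are illustrative, not taken from the prototype:

```python
import hmac
import hashlib

def record_fingerprint(data: bytes, audit_key: bytes) -> str:
    """Run by the TTP at upload time: store a keyed fingerprint of the file."""
    return hmac.new(audit_key, data, hashlib.sha256).hexdigest()

def audit(data_from_cloud: bytes, audit_key: bytes, stored_fingerprint: str) -> bool:
    """Run by the TTP later: the file is intact iff the fingerprints match."""
    current = hmac.new(audit_key, data_from_cloud, hashlib.sha256).hexdigest()
    return hmac.compare_digest(current, stored_fingerprint)

key = b"ttp-audit-key"          # held by the TTP, never by the CSP
original = b"employee records"
fp = record_fingerprint(original, key)

assert audit(original, key, fp)                 # untampered file passes
assert not audit(b"employee record$", key, fp)  # any modification is detected
```

Because the audit key is held only by the TTP, the CSP cannot forge a passing fingerprint for a modified file.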
[Graph: auditing time in milliseconds versus file size from 2 to 9 kilobytes, comparing Roshan et al. (2014) against SCSM.]
6.5.4 Trusted and Secure Key Management Approach
In this research, the cloud industry's best practices for key management were reviewed. It was identified that cloud storage services such as S3 and GCS rely on server-side KM approaches, where the clients' secret keys are stored with the CSPs and protected using software- and hardware-based security mechanisms. However, this practice has shortcomings, especially illegal access by malicious insiders such as disgruntled employees. The information security industry's best practice for KM is to develop a key escrow system with strong hardware security mechanisms, but this practice is not applicable in a cloud computing environment because key escrow systems can be accessed by government authorities such as the NSA to decrypt clients' data. Clients dealing with confidential data, such as the healthcare sector, do not trust cloud storage services with such an untrusted KM approach, as it violates their compliance with HIPAA (Ferreira, 2013).
The KM approach developed in this research surpasses current industry best practices. This research developed a trusted and secure key escrow system designed around the data security concerns of a cloud computing environment. The developed key escrow is secured using hardware security such as a TPM. Using SCSM, clients can trust the KM approach because no authority, such as third party government officials or CSPs, can access the clients' private keys. The keys remain stored with a TTP to avoid loss or theft; they are encoded using sound steganography techniques, which require about 3 to 4 seconds for the key encoding and decoding process. Consequently, no one is privileged to decrypt and regenerate the secret keys, because access to decode a key from sound is granted only to the data owner, and this access is guarded by a multi-factor authentication and authorization process using RBAC with CRSCG. The key management approach used in SCSM was proven effective during the research evaluation process, as discussed in Section 6.3.3, since it protects the keys from loss, theft and illegal access from generation until destruction.
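The exact encoding routine is not reproduced here, but least-significant-bit embedding in PCM audio samples is one common form of sound steganography. The following minimal sketch works under that assumption, with sample handling simplified to a plain list of 16-bit integers rather than a real audio file:

```python
def embed_key(samples, key: bytes):
    """Hide each bit of the key in the least significant bit of one sample."""
    bits = [(byte >> i) & 1 for byte in key for i in range(8)]
    out = list(samples)
    for pos, bit in enumerate(bits):
        out[pos] = (out[pos] & ~1) | bit  # overwrite the LSB only
    return out

def extract_key(samples, key_len: int) -> bytes:
    """Recover key_len bytes from the LSBs of the first key_len*8 samples."""
    data = bytearray()
    for byte_index in range(key_len):
        byte = 0
        for i in range(8):
            byte |= (samples[byte_index * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

pcm = [1000, -3201, 52, 7, 0, -1, 88, 4096] * 32   # stand-in for real audio
stego = embed_key(pcm, b"AES-key-0123")
assert extract_key(stego, 12) == b"AES-key-0123"
```

Changing only the least significant bit alters each sample by at most one unit, which is inaudible; combined with RBAC and CRSCG, only the data owner may run the extraction step.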
6.5.5 Extremely Secure Multi-factor Authentication and
Authorization Process
The cloud industry's best practices for authentication and authorization include a two-factor authentication process: users are first authenticated by validating their username and password, and secondly they must provide a random security token in order to access personal resources such as applications and data. The security token can be generated by the user through an isolated channel such as a token generator device or a smartphone application. To ensure secure access to mission-critical data, user privileges are also controlled using RBAC, so users can perform only those operations granted to their roles (Amazon, 2011; Google, 2012a). Although this process is secure, considering the concerns of clients dealing with mission-critical data and the capabilities of powerful hardware systems used to crack security codes and passwords, this research enhanced the security of the multi-factor authentication and authorization process by extending the traditional implementation of RBAC and by designing a token generator algorithm, CRSCG, that generates a complex 12-character code consisting of numbers, letters and special symbols.
Using SCSM, the access of each user is controlled at the task level; in other words, users are required to prove their authenticity prior to performing each task, unlike other RBAC-based systems where users can perform all privileged tasks once they are authenticated. This process limits the possible impact of session hijacking attacks on users' data and resources, since performing each task requires a valid security token. The token required to perform any operation is not guessable, due to its complexity and randomization. Current industry practices are based on 6-digit tokens (Amazon, 2011; Google, 2012a), but in this research CRSCG generates a 12-character complex token and delivers it to the user through the available facility, which can be a token generator device, a message or a smartphone application.
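CRSCG itself is specified elsewhere in the thesis; as an illustration only, a 12-character mixed-class token of the kind described can be drawn from a cryptographically secure source as sketched below. The alphabet and the retry loop are assumptions for this sketch, not the CRSCG algorithm:

```python
import secrets
import string

# Assumed alphabet: letters, digits, and a set of special symbols.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*+-_?"

def generate_token(length: int = 12) -> str:
    """Draw characters from a CSPRNG until the token mixes all three classes."""
    while True:
        token = "".join(secrets.choice(ALPHABET) for _ in range(length))
        if (any(c.isdigit() for c in token)
                and any(c.isalpha() for c in token)
                and any(not c.isalnum() for c in token)):
            return token

token = generate_token()
assert len(token) == 12
```

The `secrets` module, unlike `random`, is designed for security tokens, so the output is not predictable from earlier tokens.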
The experimental results proved that the security token is impractical to crack using brute-force and dictionary attacks, even for devices with powerful processing capabilities such as fast parallel GPUs, standard desktop systems, and botnets, as shown in Figure 6.16. Since the token is valid only for a single login session, it cannot realistically be cracked in such a limited time with existing processing capabilities.
Figure 6.16: Security Experiment on CRSCG
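The infeasibility claim can be checked with back-of-the-envelope arithmetic. The attacker speed and the count of special symbols below are assumed figures for illustration, not measurements from the experiment:

```python
# 12-character token over letters, digits and (an assumed) 12 special symbols.
alphabet_size = 26 + 26 + 10 + 12          # 74 possible characters
keyspace = alphabet_size ** 12             # number of candidate tokens, ~2.7e22

guesses_per_second = 10**12                # assumed: a fast GPU cluster
seconds = keyspace / guesses_per_second
years = seconds / (365 * 24 * 3600)

print(f"keyspace = {keyspace:.2e}, exhaustive search takes about {years:.0f} years")
```

Even at a trillion guesses per second, exhausting the keyspace takes centuries, whereas the token expires after a single session.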
With the development of extended RBAC and CRSCG, SCSM ensures that only authorized users interact with the system and perform their privileged tasks. This provides a sense of trust, confidence and satisfaction to clients using third party cloud storage services for their mission-critical data. The secured and authorized access preserves clients' data against confidentiality and integrity breaching threats launched by malicious insiders as well as outsiders. Although the existing cloud industry authentication and authorization processes are secure, this research further strengthened their security capabilities to guarantee secure access to mission-critical data.
6.6 Summary
The research aim of preserving data confidentiality and integrity, as well as ensuring the delivery of trusted cloud storage services, was achieved by designing and developing SCSM. However, the strength of SCSM depends heavily on its components, each of which must be effective in achieving its target. For example, the implemented cryptography technique should be secure, the SLA must gain clients' trust, and the SSL certificate must be properly configured at the server. The authentication and authorization process must ensure secure access to the system. TTP services such as the KM approach should be secure, and the data auditing service must be protected and safe. The proposed five components, the entire SCSM process and the system were evaluated using various tools, techniques and methods. The evaluation results proved that all components of SCSM achieve their objectives; consequently, SCSM achieves its intended goal.
CHAPTER 7
CONCLUSION AND FUTURE WORK
7.1 Introduction
Business organizations can leverage significant cost-effective advantages of
using third party remotely located cloud storage services such as ubiquitous
accessibility, resilient availability, unlimited storage capacity, backup and disaster
recovery (Paul and Shanmugapriyaa, 2012). However, due to the security limitations of cloud storage services, CSPs have been unable to earn clients' trust. Cloud storage is a very attractive offering for small organizations that do not have strict data security and privacy requirements, whereas organizations dealing with confidential data are reluctant to adopt this service due to emerging data integrity and confidentiality concerns (Syam and Subramanian, 2011; Gansen et al., 2010). The implementation of ineffective security techniques and improper SLAs has created a barrier of trust between cloud storage providers and the adopting organizations (Asha, 2012).
Researchers have recently attempted to overcome this shortcoming by designing and developing security models to be applied in cloud storage services; however, these contributions require improvements and enhancements before being widely accepted by CSPs for offering trusted, integrity- and confidentiality-preserving cloud storage services to clients. This thesis presents the contribution made by this research to overcome the stated research problem. The resulting contribution can bring significant advantages to the cloud industry. However, certain requirements are not covered by the scope of this research and need to be addressed in its future direction. The remainder of this chapter is organized into four sections. Section 7.2 describes the contributions and significance of the research. Section 7.3 describes the potential real-world applications of SCSM. Section 7.4 describes the limitations and future direction of the research. Section 7.5 summarizes this chapter.
7.2 Contributions and Significance
The research conducted as part of this thesis significantly advanced cloud storage security. Most prior contributions did not address a complete security-based solution incorporating an effective SLA, secure and dynamic cryptography operations, a resilient access control mechanism, 256-bit SSL encryption, and improved TTP services in one model that can be used to offer trusted, confidentiality- and integrity-preserving cloud storage services. This research enhances the security of cloud storage services. The technical contributions obtained during the research are as follows:
i. This research investigated the adoption of cloud storage services and their vulnerabilities from the perspective of data confidentiality, integrity and trust. Specifically, the cloud storage security solutions provided by Amazon, Google, Seiger et al. (2011), Nirmala et al. (2013), Nepal et al. (2011), Puttaswamy et al. (2011) and Varalakshmi and Deventhiran (2012) were reviewed and analyzed to determine their strengths for developing a secure cloud storage approach.
ii. During the literature review, this research analyzed the related work contributions and used the resulting information to justify that existing cloud storage security models incorporate limitations which need to be addressed by developing an improved and enhanced model.
iii. Considering the information obtained from the literature review, this
research designed SCSM using various security models to preserve data
confidentiality and integrity while acquiring and using third party
remotely located cloud storage services and to ensure the delivery of
trusted services to the clients.
iv. This research developed a web-based prototype for SCSM to illustrate its
functional behaviour, i.e. the system usability, and to evaluate the security
strengths of its components as well as the entire model itself. The
developed prototype was successfully configured as well as deployed at
the domain www.utmcloudstorage.com.
v. The acceptance and security of the developed model were verified using various evaluation techniques. The achieved results were analyzed, which proved that SCSM can be successfully used by CSPs for preserving data confidentiality and integrity at cloud storages and for ensuring the delivery of trusted cloud storage services to the clients.
The contributions made by this research and list of publications as well as the
cloud computing and information security certificates obtained at each stage of the
study are summarized in Figure 7.1.
Figure 7.1: Contributions, Publications and Certificates
[Flowchart: research stages from initial investigation and further investigation, through the literature review, to the design, development and evaluation of SCSM, annotated with the publications produced at each stage (Brohi and Bamiah, 2011; 2011a; Bamiah and Brohi, 2011; 2011a; Brohi, 2011; Brohi et al., 2012; 2012a; 2013; Bamiah et al., 2012; 2012a; 2012b; 2013), the certificates obtained (IBM Certified Cloud Solution Architect; IBM Certified Cloud Solution Advisor; Rackspace Certified CloudU; Health Informatics in Cloud, an online course from Georgia Institute of Technology, USA; CSA Certified Cloud Computing Security Knowledge; EC-Council Certified Ethical Hacker), and the literature review contributions (Amazon S3; Google Cloud Storage; Seiger et al., 2011; Nirmala et al., 2013; Nepal et al., 2011; Varalakshmi and Deventhiran, 2012; Puttaswamy et al., 2011).]
7.3 Potential Applications of SCSM
Cloud storage services have recently gained enormous popularity among SMBs; however, large enterprises are not keen to rely on an external vendor for storing their confidential data and are instead building personalized cloud storages. These private cloud solutions do not possess economies of scale, so an organization must rely on external cloud storages to leverage cost-effective and scalable storage benefits (Vahid et al., 2012). SCSM is designed to accommodate the security requirements of large enterprises so they can store their confidential data at cloud storages with trust and without data confidentiality or integrity concerns.
On successful development and deployment of SCSM, it can be used by CSPs to offer trusted cloud storage services to business sectors such as banking, healthcare, education and PCI. For instance, a healthcare organization such as a hospital can adopt SCSM to store its data, including patient, doctor and staff records, in the cloud. Data will be stored at multiple backup zones, as specified in the SLA, to protect it during natural disasters, and storage services will run on durable servers to provide high availability and ubiquitous accessibility. Acquiring remote cloud storage services will enable the adopting hospital to scale the required storage capacity up or down at any time without physical interaction or purchasing new software and hardware. Using SCSM, organizations can also perform limited operations on data while it is encrypted; with an efficient implementation of FHE, this facility is expected to become unrestricted in the near future. SCSM can also be adopted by organizations for storing their organizational data in order to leverage cost-effective, durable, scalable and unlimited-capacity cloud storage services.
7.4 Limitations and Future Directions of Research
Although the evaluation results proved that SCSM succeeds in preserving the confidentiality and integrity of data at remote cloud storages and enables the delivery of trusted services to clients according to their SLA, cloud storage security is an emerging and vast area of research, of which this thesis has covered only the part stated in the research scope. There are certain limitations in the current research that can be addressed in future work. This section briefly lists four potential avenues for extending the research.
7.4.1 Fully Homomorphic Encryption
Due to the implementation of RSA partial homomorphic cryptography, users of SCSM can perform limited operations on their confidential data while it remains encrypted at the cloud storage. In the future direction of this research, the utilization of FHE could bring significant advancement to the field of cloud storage security. Although Gentry (2009) implemented FHE, it has not yet proved efficient enough for practical use in complex cloud computing environments (Kui et al., 2012; Wang et al., 2013; Stefania et al., 2012). Considering the emerging research on FHE, it may become fully practical in the near future. With the availability of this technique, the SCSM implementation could be enhanced to allow clients to perform an unlimited number of transactions on their data while protecting its confidentiality and integrity and ensuring the delivery of trusted services.
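The "limited operations" enabled by textbook RSA follow from its multiplicative homomorphism: the product of two ciphertexts decrypts to the product of the plaintexts. A toy demonstration with deliberately tiny primes (never usable as real parameters):

```python
# Toy RSA parameters: p and q are far too small for real use.
p, q = 61, 53
n = p * q                      # modulus: 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent via modular inverse: 2753

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

a, b = 7, 9
c = (enc(a) * enc(b)) % n      # the cloud multiplies ciphertexts only
assert dec(c) == (a * b) % n   # ...yet the decryption is the product, 63
```

FHE extends this to arbitrary computation (both addition and multiplication, arbitrarily composed), which is why its practical realization would lift the restriction to limited operations.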
7.4.2 Heterogeneous Data
The implementation process discussed in Chapter 5 is based on uploading a single text-based document containing the employee records of an organization. In a real-life scenario, however, users may upload numerous files containing heterogeneous data types, such as text, images or videos, to cloud storages. For example, if the adopting client is a healthcare organization, it may need to upload images containing patients' health reports. To narrow the research scope, the SCSM implementation focuses only on text-based documents. The developed prototype can be further enhanced to work directly with database management systems operating at cloud storages, and extended to support multiple data types such as text, images and video.
7.4.3 Performance
In this research, the performance of certain SCSM components, such as data encryption, decryption and the data integrity verification process, was analyzed to ensure that these tasks are efficient. However, SCSM was designed and developed mainly around the data security requirements of organizations dealing with highly confidential data, so the main focus was to formulate a solution with improved and enhanced security capabilities. In a cloud computing environment, the performance of the system must also be considered a significant factor, so there is significant opportunity for future work to experiment with, analyze and evaluate the performance of SCSM in an actual cloud datacentre to avoid potential performance overheads. The mutual understanding between the users involved in the SCSM computing process is currently established via emails; this process should also be improved for better system usability. SCSM can prove to be a complete and valuable solution for securing cloud storage services when the trade-off between performance and security is well managed.
7.4.4 Multi-user Computing Environment
The implementation of SCSM focuses on the interaction of a single client with the CSP and TTP. In real-life scenarios, however, several clients must be able to use cloud storage services simultaneously and interact with the TTP to ensure their data integrity and confidentiality, as well as service compliance with the settled SLA. The implementation of SCSM can be altered to support multiple client interaction and to enable the TTP to provide concurrent data auditing and KM services to all registered clients.
7.5 Summary
Researchers across the globe are developing innovative solutions in the field of cloud computing security. However, cloud security is a vast area of research; for instance, some researchers are working on securing the virtualization layer of a cloud computing platform, whereas others focus on the physical and application layers. To narrow the scope, this research mainly targeted a sub-offering of IaaS, i.e. cloud storage services. SCSM was formulated as a solution to protect the confidentiality and integrity of sensitive data at remotely located cloud storages and to ensure the delivery of trusted cloud storage services to clients. However, the noteworthy contribution made by SCSM is only part of the solution required for using cloud storage services; there is significant opportunity for future research to enhance SCSM in various dimensions to produce a complete solution for trusted and secure cloud storage services.
REFERENCES
Achemlal, M., Gharout, S. and Gaber, C. (2011). Trusted Platform Module as an
Enabler for Security in Cloud Computing. Proceedings of 2011 International
Conference on Network and Information Systems Security (SAR-SSI), 18-21 May.
La Rochelle, 1-6.
Adebiyi, A., Johnnes, A. and Chris, I. (2012). Security Assessment of Software Design
using Neural Network, International Journal of Advanced Research in Artificial
Intelligence, 1 (4), 1 – 7. SAI Publications.
Afoulki, Z., Bousquet, A., Briffaut, J., Rouzaud, J. and Toinard, C. (2012). MAC
Protection of the Open Nebula Cloud Environment, Proceedings of 2012
International Conference on High Performance Computing and Simulation (HPCS),
2-6 July. Madrid, 85-90.
Ahmed, A. (2012). Meeting PCI DSS When Using a Cloud Service Provider. Journal
of Information Systems Audit and Control Association (JISACA), 5, 24 – 30. ISACA
Publications.
Ahmed, S. and Raja, M. (2010). Tackling Cloud Security Issues and Forensics Model.
Proceedings of 2010 High-Capacity Optical Networks and Enabling Technologies
(HONET), 19-21 December. Cairo, 190-195.
Amazon, (2011). Amazon Web Services: Overview of Security Processes. Amazon
Web Services, Inc.
Amazon, (2013). Amazon Simple Storage Service SLA. Amazon Web Services.
Retrieved July 2, 2013, from http://aws.amazon.com/s3/sla.
Amazon, (2014). Amazon Simple Storage Service: Developer Guide. API Version
2006-03-01. Amazon Web Services, Inc.
Ang, L., Yang, X., Srikanth, K. and Ming, Z. (2011). Comparing Public-Cloud
Providers. Internet Computing, 15 (2), 50 – 53. IEEE Computer Society.
Asghar, M., Russello, G., Bruno, C. and Ion, M. (2013). Supporting Complex Queries
and Access Policies for Multi-user Encrypted Databases. Proceedings of the 2013
ACM Workshop on Cloud Computing Security. 4-8 November. New York, 77-88.
Asha, M. (2012). Security and Privacy Issues of Cloud Computing: Solutions and
Secure Framework. International Journal of Multidisciplinary Research, 2 (4), 182
– 193. Zenith International Research and Academic Foundation.
Astrova, I., Stella, G., Marc, S., Koschel, A., Jan, B., Kellermeier, M., Stefan, N.,
Francisco, C. and Michael, H. (2012). Security of a Public Cloud. Proceedings of
2012 6th International Conference on Innovative Mobile and Internet Services in
Ubiquitous Computing (IMIS), 4-6 July. Palermo, 564-569.
Ayushi. (2010). A Symmetric Key Cryptographic Algorithm. International Journal of
Computer Applications, 1 (15), 1 – 4. Foundation of Computer Science.
Bamiah, M. and Brohi, S. (2011). Exploring the Cloud Deployment and Service
Delivery Models. International Journal of Research and Reviews in Information
Sciences (IJRRIS), 1 (3), 77 – 80. Science Academy Publisher.
Bamiah, M. and Brohi, S. (2011a). Seven Deadly Threats and Vulnerabilities in Cloud
Computing. International Journal of Advanced Engineering Sciences and
Technologies (IJAEST), 9 (1), 87 – 90. International Scientific Engineering and
Research Publications.
Bamiah, M., Brohi, S. and Chuprat, S. (2012b). Using Virtual Machine Monitors to
Overcome the Challenges of Monitoring and Managing Virtualized Cloud
Infrastructures. Proceedings of 2012 4th International Conference on Machine
Vision (ICMV 2011). 9-10 December. Singapore, 187-192.
Bamiah, M., Brohi, S., Chuprat, S. and Brohi, M. (2012a). Cloud Implementation
Security Challenges. Proceedings of 2012 International Conference on Cloud
Computing Technologies, Applications and Management (ICCCTAM). 8-10
December. Dubai, 174–178.
Bamiah, M., Brohi, S., Chuprat, S. and Jamalul-lail, A. (2012). A Study on
Significance of Adopting Cloud Computing Paradigm in Healthcare Sector.
Proceedings of 2012 International Conference on Cloud Computing Technologies,
Applications and Management (ICCCTAM). 8-10 December. Dubai, 65–68.
Bamiah, M., Brohi, S., Chuprat, S. and Jamalul-lail, A. (2013). Trusted Cloud
Computing Framework for Healthcare Sector, Journal of Computer Science, 10 (2),
240 – 250. Science Publications.
Baun, C. and Kunze, M. (2009). Building a Private Cloud with Eucalyptus.
Proceedings of 2009 5th IEEE International Conference on E-Science Workshops.
9-11 December. Oxford, 33-38.
Bouayad, A., Bilalat, A., Mejhed, N. and Ghazi, M. (2012). Cloud Computing:
Security Challenges. Proceedings of 2012 Colloquium in Information Science and
Technology (CIST). 22-24 October. Fez, 26-31.
Bourque, P. and Fairley, R. (2014). Guide to the Software Engineering Body of
Knowledge (SWEBOK). (Version 3.0). IEEE Computer Society. Los Alamitos, CA,
USA.
Brohi, S. (2011). A Trusted Virtual Private Space Model for Enhancing the Level of
Trust in Cloud Computing Technology. International Journal of Research and
Reviews in Information Sciences (IJRRIS), 1 (3), 74 – 76. Science Academy
Publisher.
Brohi, S. and Bamiah, M. (2011). Challenges and Benefits for Adopting the Paradigm
of Cloud Computing. International Journal of Advanced Engineering Sciences and
Technologies (IJAEST), 8 (2), 286 – 290. International Scientific Engineering and
Research Publications.
Brohi, S. and Bamiah, M. (2011a). Exploit of Open Source Hypervisors for Managing
the Virtual Machines on Cloud. International Journal of Advanced Engineering
Sciences and Technologies (IJAEST), 9 (1), 55 – 60. International Scientific
Engineering and Research Publications.
Brohi, S., Bamiah, M., Brohi, M. and Kamran, R. (2012). Identifying and Analysing
Security Threats to Virtualized Cloud Computing Infrastructures. Proceedings of
2012 International Conference on Cloud Computing Technologies, Applications and
Management (ICCCTAM). 8-10 December. Dubai, 151–155.
Brohi, S., Bamiah, M., Chuprat, S. and Jamalul-lail, A. (2012a). Towards an Efficient
and Secure Educational Platform on Cloud Infrastructure. Proceedings of 2012
International Conference on Cloud Computing Technologies, Applications and
Management (ICCCTAM). 8-10 December. Dubai, 145–150.
Brohi, S., Bamiah, M., Chuprat, S. and Jamalul-lail, A. (2013), Design and
Implementation of Privacy Preserved Off-Premises Cloud Storage. Journal of
Computer Science, 10 (2), 210 – 223. Science Publications.
Catteddu, D. and Hogben, G. (2009). Benefits, Risks and Recommendations for
Information Security. European Network of Information Security Agency (ENISA).
Retrieved June 25, 2013, from http://www.enisa.europa.eu/activities/risk-management/files/deliverables/cloud-computing-risk-assessment.
Chan, J., Nepal, S., Moreland, D., Hwang, H., Chen, S. and Zic, J. (2007). User-
Controlled Collaborations in the Context of Trust Extended Environments.
Proceedings of 2007 16th IEEE International Workshops on Enabling
Technologies: Infrastructure for Collaborative Enterprises (WETICE). 18-20 June.
Evry, 389-394.
Chirag, F., Shrikanth, V. and Trivedi, H. (2012). Cloud Security Using Authentication
and File Base Encryption. International Journal of Engineering Research and
Technology (IJERT), 1 (10), 8 – 12. Engineering and Science Research Support
Academy Publications.
Cohen, B. (2013). PaaS: New Opportunities for Cloud Application Development.
Transactions on Computers, 46 (9), 97 – 100. IEEE Computer Society.
Cong, W., Chow, S., Qiang, W., Kui, R. and Lou, W. (2013). Privacy-Preserving
Public Auditing for Secure Cloud Storage. Transactions on Computers, 62 (2), 362
– 375. IEEE Computer Society.
Cong, W., Kui, R., Lou, W. and Jin, Li. (2010). Toward Publicly Auditable Secure
Cloud Data Storage Services. Network, 24 (4), 19 – 24. IEEE Communications
Society.
Cong, W., Qiang, W., Kui, R., Ning, C. and Lou, W. (2012). Toward Secure and
Dependable Storage Services in Cloud Computing. Transactions on Services
Computing, 5 (2), 220 – 232. IEEE Computer Society.
CSA. (2011). Security Guidance for Critical Areas of Focus in Cloud Computing V-
3.0, Cloud Security Alliance (CSA). Retrieved July 5, 2013, from
https://cloudsecurityalliance.org/guidance/csaguide.v3.0.pdf.
Deyan, C. and Hong, H. (2012). Data Security and Privacy Protection Issues in Cloud
Computing. Proceedings of 2012 International Conference on Computer Science
and Electronics Engineering (ICCSEE). 23-25 March. Hangzhou, 647-651.
Dillon, T., Chen, W. and Chang, E. (2010). Cloud Computing: Issues and Challenges.
Proceedings of 2010 24th IEEE International Conference on Advanced Information
Networking and Applications (AINA). 20-23 April. Perth, 27-33.
Dongxi, L., Lee, J., Jang, J., Nepal, S. and Zic, J. (2010). A Cloud Architecture of
Virtual Trusted Platform Modules. Proceedings of 2010 8th IEEE/IFIP
International Conference on Embedded and Ubiquitous Computing (EUC). 11-13
December. Hong Kong, 804-811.
Duncan, A., Creese, S. and Goldsmith, M. (2012). Insider Attacks in Cloud
Computing. Proceedings of 2012 IEEE 11th International Conference on Trust,
Security and Privacy in Computing and Communications (TrustCom). 25-27 June.
Liverpool, 857-862.
eApps, (2014). Custom Virtual Server Hosting in a True Cloud Platform. Retrieved 15
August, 2014, from http://www.eapps.com/cloud-solutions/virtual-machine-
hosting.php.
Eric, B. (2013). The Top Cloud Companies: Here’s What Customers Think of Them:
Retrieved July 19, 2014, from http://venturebeat.com/2013/09/09/top-cloud-
companies-amazon-google-microsoft/.
Fadadu, C., Shrikanth, V. and Trivedi, H. (2012). Cloud Security Using Authentication
and File Base Encryption. International Journal of Engineering Research and
Technology (IJERT), 1 (10), 15 – 18. Engineering and Science Research Support
Academy Publications.
Ferreira, A. (2013). Google Encrypts All Data In Cloud Storage. Retrieved July 28,
2014, from http://www.securitybistro.com/?p=7931.
Forrester. (2012). IT Purchasing Goes Social. Forrester Consulting and Research Now.
Retrieved October 23, 2013, from http://www.iab.net/media/file/IT
_Purchasing_Goes_Social-Best_Practices_Final.pdf.
Franke, J., Boehm, M., Bahar, F. and Kleinjung, T. (2005). Factoring 640-bit RSA.
Crypto World. Retrieved October 9, 2013, from http://www.crypto-
world.com/announcements/rsa640.txt.
Friedman, E. and Savio, C. (2013). Influencing the Mass Affluent: Building
Relationship on Social Media. LinkedIn Corporation. Retrieved October 23, 2013,
from http://marketing.linkedin.com/sites/default/files/attachment
/MassAffluentWhitepaper.pdf.
Gall, M., Schneider, A. and Fallenbeck, N. (2013). An Architecture for Community
Clouds Using Concepts of the Intercloud. Proceedings of 2013 IEEE 27th
International Conference on Advanced Information Networking and Applications
(AINA). 25-28 March. Barcelona, 1634-1639.
Gansen, Z., Rong, C., Jin, L., Feng, Z. and Yong, T. (2010). Trusted Data Sharing over
Untrusted Cloud Storage Providers. Proceedings of 2010 IEEE 2nd International
Conference on Cloud Computing Technology and Science (CloudCom). 30
November – 3 December. Indianapolis, 97-103.
Gentry, C. (2009). Fully Homomorphic Encryption using Ideal Lattices. Proceedings
of the 2009 41st Annual ACM Symposium on Theory of Computing. 30 May – 2
June. New York, 169-178.
Ghosh, N. and Ghosh, K. (2012). An Approach to Identify and Monitor SLA
Parameters for Storage-as-a-Service Cloud Delivery Model. Proceedings of 2012
IEEE Globecom Workshops (GC Wkshps). 3-7 December. Anaheim, 724-729.
Gibson, J., Rondeau, R., Eveleigh, D. and Qing, T. (2012). Benefits and Challenges of
Three Cloud Computing Service Models. Proceedings of 2012 4th International
Conference on Computational Aspects of Social Networks (CASoN). 21-23
November. Sao Carlos, 198-205.
Goluch, S. (2011). The Development of Homomorphic Cryptography from RSA to
Gentry's Privacy Homomorphism. Vienna University of Technology, Vienna.
Google, (2012). Google Cloud Storage: A Simple Way to Store, Protect, and Share
Data. Google Inc., USA.
Google, (2012a). Google’s Approach to IT Security: A Google White Paper. Google
Inc., USA.
Google, (2013). Just Develop IT Migrates Petabytes of Data to Google Cloud Storage.
Retrieved July 27, 2014, from http://googlecloudplatform.blogspot.com
/2013/11/justdevelopit-migrates-petabytes-of-data-to-google-cloud-storage.html.
Google, (2014). Google Cloud Storage: Authentication. Retrieved July 27, 2014, from
https://cloud.google.com/storage/docs/authentication.
Gupta, S., Horrow, S. and Sardana, A. (2012). IDS Based Defense for Cloud Based
Mobile Infrastructure as a Service. Proceedings of 2012 IEEE 8th World Congress
on Services (SERVICES). 24-29 June. Honolulu, 199-202.
Harris, C. (2011). IT Downtime Costs $26.5 Billion In Lost Revenue. Retrieved
August 7, 2014, from http://www.informationweek.com/it-downtime-costs-$265-
billion-in-lost-revenue/d/d-id/1097919.
Haibo, H., Jianliang, X., Chushi, R. and Byron, C. (2011). Processing Private Queries
over Untrusted Data Cloud through Privacy Homomorphism. Proceedings of 2011
27th International Conference on Data Engineering. 11-16 April. Hannover, 601-
612.
Hofmann, P. and Woods, D. (2010). Cloud Computing: The Limits of Public Clouds
for Business Applications. Internet Computing, 14 (6), 90 – 93. IEEE Computer
Society.
Huaqun, W. (2013). Proxy Provable Data Possession in Public Clouds. Transactions on
Services Computing, 6 (4), 551 – 559. IEEE Computer Society.
Hunsinger, S. and Corley, J. (2013). What Influences Students to Use Dropbox?
Journal of Information Systems Applied Research (JISAR), 6 (3), 18 – 25. Education
Special Interest Group (EDSIG).
Ivan, R., Christian, B., Christian, F., Robert, H., Shezaf, O. and Colin, W. (2013).
SSL Server Rating Guide. Qualys SSL Labs. Retrieved October 3, 2013, from
https://www.ssllabs.com/projects/rating-guide/.
Jadeja, Y. and Modi, K. (2012). Cloud Computing-Concepts, Architecture and
Challenges. Proceedings of 2012 International Conference on Computing,
Electronics and Electrical Technologies (ICCEET). 21-22 March. Kumaracoil, 877-
880.
Jang-Jaccard, J., Manraj, A. and Nepal, S. (2012). Portable Key Management Service
for Cloud Storage. Proceedings of 2012 8th International Conference on
Collaborative Computing: Networking, Applications and Worksharing
(CollaborateCom). 14-17 October. Pittsburgh, 147-156.
Jansen, W. and Grance, T. (2011). Guidelines on Security and Privacy in Public Cloud
Computing. National Institute of Standards and Technology (NIST). NIST Special
Publication 800-144.
Janssen, C. (2010). Key Escrow. Retrieved August 20, 2014, from
http://www.techopedia.com/definition/3997/key-escrow.
Javaraiah, V. (2011). Backup for Cloud and Disaster Recovery for Consumers and
SMBs. Proceedings of 2011 IEEE 5th International Conference on Advanced
Networks and Telecommunication Systems (ANTS). 18-21 December. Bangalore, 1-
3.
Jeff, B. (2011). New - Amazon S3 Server Side Encryption for Data at Rest. Retrieved
July 20, 2014, from http://aws.amazon.com/blogs/aws/new-amazon-s3-server-side-
encryption/.
Jeff, B. (2011a). Client-Side Data Encryption for Amazon S3 Using the AWS SDK for
Java. Retrieved July 22, 2014, from http://aws.amazon.com/blogs/aws/client-side-
data-encryption-using-the-aws-sdk-for-java/.
Jeff, B. (2014). Use Your Own Encryption Keys with S3’s Server-Side Encryption.
Retrieved July 22, 2014, from http://aws.amazon.com/blogs/aws/s3-encryption-
with-your-keys/.
Jiang, W., Zhiming, Z. and Laat, C. (2013). An Autonomous Security Storage Solution
for Data-Intensive Cooperative Cloud Computing, Proceedings of 2013 IEEE 9th
International Conference on eScience (eScience). 22-25 October. Beijing, 369-372.
Jing-Jang, H., Chuang, H., Yi-Chang, H. and Chien-Hsing, W. (2011). A Business
Model for Cloud Computing Based on a Separate Encryption and Decryption
Service, Proceedings of 2011 International Conference on Information Science and
Applications (ICISA). 26-29 April. Jeju Island, 1-7.
Junjie, P., Xuejun, Z., Zhou, L., Bofeng, Z., Wu, Z. and Qing, L. (2009). Comparison
of Several Cloud Computing Platforms. Proceedings of 2009 2nd International
Symposium on Information Science and Engineering (ISISE). 26-28 December.
Shanghai, 23-27.
Kalpana, P. and Sudha, S. (2012). Data Security in Cloud Computing using RSA
Algorithm. International Journal of Research in Computer and Communication
Technology (IJRCCT), 1 (4), 143 – 146.
Kandukuri, R., Paturi, R. and Rakshit, A. (2009). Cloud Security Issues. Proceedings
of 2009 IEEE International Conference on Services Computing (SCC), 21-25
September. Bangalore, 517-520.
Karumanchi, S. (2010). A Trusted Storage System for the Cloud. Masters of Science in
College of Engineering, University of Kentucky, USA.
Keung, J. and Kwok, F. (2012). Cloud Deployment Model Selection Assessment for
SMEs: Renting or Buying a Cloud. Proceedings of 2012 IEEE 5th International
Conference on Utility and Cloud Computing (UCC). 5-9 November. Chicago, 21-
28.
Kevin, B. and Hanf, D. (2010). Cloud SLA Consideration for the Government
Consumers. The MITRE Corporation. Case Number 10-2902.
Khan, A. (2012). Access Control in Cloud Computing Environment, Journal of
Engineering and Applied Sciences (JEAS), 7 (5), 613 – 615. ARPN Publications.
Khan, K. and Malluhi, Q. (2010). Establishing Trust in Cloud Computing. IT
Professional, 12 (5), 20 – 27. IEEE Computer Society.
Kui, R., Cong, W. and Qian, W. (2012). Security Challenges for the Public Cloud.
Internet Computing, 16 (1), 69 – 73. IEEE Computer Society.
Kumar, S. and Dubey, N. (2013). Cloud Computing (A Survey on Cloud Computing
Security Issues and Attacks in Private Clouds). International Journal of Emerging
Trends in Engineering and Development (IJETED), 1 (2), 416 – 427. RS
Publications.
Le, X., Li, L., Nagarajan, V., Dijiang, H. and Wei-Tek, T. (2013). Secure Web
Referral Services for Mobile Cloud Computing. Proceedings of 2013 IEEE 7th
International Symposium on Service Oriented System Engineering (SOSE). 25-28
March. Redwood City, 584-593.
Lin, Y., Hongli, Z., Jiantao, S. and Xiaojiang, D. (2012). Verifying Cloud Service
Level Agreement. Proceedings of 2012 IEEE Global Communications Conference
(GLOBECOM). 3-7 December. Anaheim, 777-782.
Ling, L., Lin, X., Jing, L. and Changchun, Z. (2011). Study on the Third-party Audit in
Cloud Storage Service. Proceedings of 2011 International Conference on Cloud and
Service Computing (CSC). 12-14 December. Hong Kong, 220-227.
Loewen, G., Galloway, M. and Vrbsky, S. (2013). Designing a Middleware API for
Building Private IaaS Cloud Architectures. Proceedings of 2013 IEEE 33rd
International Conference on Distributed Computing Systems Workshops (ICDCSW).
8-11 July. Philadelphia, 103-107.
Marshal, S. (2013). Secure Audit Service by Using TPA for Data Integrity in Cloud
System. International Journal of Innovative Technology and Exploring Engineering
(IJITEE), 3 (4), 49 – 52.
Marston, S., Zhi, L., Bandyopadhyay, S. and Ghalsasi, A. (2011). Cloud Computing -
The Business Perspective. Decision Support Systems (DSS), 51 (1), 176 – 189.
Elsevier.
Mazhelis, O., Fazekas, G. and Tyrvainen, P. (2012). Impact of Storage Acquisition
Intervals on the Cost-Efficiency of the Private vs. Public Storage. Proceedings of
2012 IEEE 5th International Conference on Cloud Computing (CLOUD). Honolulu,
646-653.
McRee, R. (2010). Web Security Tools: Skipfish and iScanner. Journal of Information
Systems Security Association (JISSA), 35 – 37. ISSA Publications.
Mell, P. and Grance, T. (2011). The NIST Definition of Cloud Computing, National
Institute of Standards and Technology (NIST). Retrieved June 2, 2013, from
http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf.
Michael, H. (2011). Security Recommendations for Cloud Computing Providers.
German Federal Office of Information Security (GFIS). Retrieved October 15, 2013,
from https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/
Publications/Minimum_information/SecurityRecommendationsCloudComputingPro
viders.pdf?__blob=publicationFile.
Milanov, E. (2009). The RSA Algorithm, University of Washington: Department of
Mathematics. Retrieved October 8, 2013, from http://www.math.washington.edu
/~morrow/336_09/papers/Yevgeny.pdf.
Mishra, N., Kanchan, K., Ritu, C. and Abhishek, C. (2013). Technologies of Cloud
Computing-Architecture Concepts based on Security and its Challenges.
International Journal of Advanced Research in Computer Engineering &
Technology (IJARCET), 2 (3), 1143 – 1149.
Murali, M., Kinnari, S. and Gunda, M. (2013). Enabling Secure Database as a Service
using Fully Homomorphic Encryption: Challenges and Opportunities. Cornell
University Computer Science Database. arXiv:1302.2654. 1-5.
Nepal, S., Friedrich, C., Henry, L. and Shiping, C. (2011). A Secure Storage Service in
the Hybrid Cloud. Proceedings of 2011 IEEE 4th International Conference on
Utility and Cloud Computing (UCC). 5-8 December. Victoria, 334-338.
Nepal, S., John, Z., Hon, H. and Moreland, D. (2007). Trust Extension Device:
Providing Mobility and Portability of Trust in Cooperative Information Systems. In.
International Conference on On the Move to Meaningful Internet Systems 2007:
CoopIS, DOA, ODBASE, GADA (pp. 253 – 271). Berlin Heidelberg: Springer.
Nirmala, V., Sivanandhan, R. and Lakshmi, R. (2013). Data Confidentiality and
Integrity Verification using User Authenticator Scheme in Cloud. Proceedings of
2013 IEEE International Conference on Green High Performance Computing
(ICGHPC). 14-15 March. Tamilnadu, 1-5.
Nithiavathy, R. (2013). Data Integrity and Data Dynamics with Secure Storage Service
in Cloud. Proceedings of 2013 International Conference on Pattern Recognition,
Informatics and Mobile Engineering (PRIME). 21-22 February. Salem, 125-130.
Nkosi, L., Tarwireyi, P. and Adigun, M. (2013). Insider Threat Detection Model for the
Cloud. Information Security for South Africa (ISSA), 1 – 8.
Omar M., Asif, K., Mahaboob, S. and Ramana, M. (2012). Secure Communication
using Symmetric and Asymmetric Cryptographic Techniques. International Journal
of Information Engineering and Electronic Business (IJEEB), 2 (6), 36 – 42. MECS
Publisher.
Oracle, (2013). GlassFish Server Open Source Edition Application Deployment Guide.
Release 4.0. Oracle America, Inc., 500 Oracle Parkway, Redwood City, CA.
Pang Xiong, W. and Li, D. (2013). Quality Model for Evaluating SaaS Service.
Proceedings of 2013 4th International Conference on Emerging Intelligent Data
and Web Technologies (EIDWT). 9-11 September. Xian, 83-87.
Paul, R. and Shanmugapriyaa, S. (2012). Evolution of Cloud Storage as Cloud
Computing Infrastructure Service. Journal of Computer Engineering (JCE), 1 (1),
38 – 45. International Organization of Scientific Research.
Prerna, M. and Abhishek, S. (2013). A Study of Encryption Algorithms AES, DES and
RSA for Security. Global Journal of Computer Science and Technology Network,
Web & Security. 13(15). 14 – 22.
Pressman, R. (2010). Software Engineering: A Practitioner’s Approach. (7th Edition).
McGraw-Hill. Avenue of the Americas, New York, USA.
Puttaswamy, N., Christopher, K. and Ben, Z. (2011). Silverline: Toward Data
Confidentiality in Storage-intensive Cloud Applications. Proceedings of the 2nd
ACM Symposium on Cloud Computing. 26-28 October. Cascais, 1-13.
Raj, P., Venkatesh, V. and Rengarajan, A. (2013). Software Engineering Frameworks
for Cloud Computing Paradigm. (1st Edition). Springer-Verlag. Springer London
Heidelberg New York Dordrecht.
Rajasekar, N. and Chris, I. (2010). Exploitation of Vulnerabilities in Cloud Storage.
Proceedings of The 1st International Conference on Cloud Computing, GRIDs, and
Virtualization. 21-26 November. Lisbon, 122-127.
Ranchal, R., Bharat, B., Lotfi, B. and Lilien, L. (2010). Protection of Identity
Information in Cloud Computing without Trusted Third Party. Proceedings of 2010
IEEE 29th Symposium on Reliable Distributed Systems. 31 October – 3 November.
New Delhi, 368-372.
Rivest, R., Shamir, A. and Adleman, L. (1978). A Method for Obtaining Digital
Signatures and Public-Key Cryptosystems, Communications of the ACM, 21, 120 –
126. ACM.
Rocha, F. and Correia, M. (2011). Lucy in the Sky without Diamonds: Stealing
Confidential Data in the Cloud. Proceedings of 2011 IEEE/IFIP 41st International
Conference on Dependable Systems and Networks Workshops (DSN-W). 27-30
June. Hong Kong, 129-134.
Rocha, F., Abreu, S. and Correia, M. (2011). The Final Frontier: Confidentiality and
Privacy in the Cloud. Computer, 44 (9), 44 – 50. IEEE Computer Society.
Roshan, R., Rahul, D., Vaishali, S. and Saurabh, R. (2014). Assurance of Data Integrity
in Multi-cloud Using CPDP Scheme. International Journal of Engineering Research
and Applications, 4 (2), 262 – 267.
Salim, N., Mariyam, S., Safaai, D., Rose, A., Subariah, I., Roselina, S., Siti, Z., Azizah,
R., Dayang, J., Nor, Z. and Juhana, S. (2010). Handbook of Research Methods in
Computing. (1st Edition). Faculty of Computer Science and Information System.
Universiti Teknologi Malaysia, Johor Malaysia.
Sathiyapriya, K., Malathi, D., Vijaya, K. and Nagadevi, S. (2013). A Study on Security
Challenges and Issues in Cloud Computing. International Journal of Engineering
and Innovative Technology (IJEIT), 2 (7), 256 – 261.
Sattiraju, G., Mohan, S. and Mishra, S. (2013). IDRBT Community Cloud for Indian
Banks. Proceedings of 2013 International Conference on Advances in Computing,
Communications and Informatics (ICACCI). 22-25 August. Mysore, 74-81.
Savu, L. (2011). Cloud Computing: Deployment Models, Delivery Models, Risks and
Research Challenges. Proceedings of 2011 International Conference on Computer
and Management (CAMAN). 10-12 March. Wuhan, 1-4.
Seiger, R., Stephan, G. and Alexander, S. (2011). SecCSIE: A Secure Cloud Storage
Integrator for Enterprises. Proceedings of 2011 IEEE 13th Conference on
Commerce and Enterprise Computing (CEC). 5-7 September. Luxembourg, 252-
255.
Shucheng, Y., Cong, W., Kui, R. and Wenjing, L. (2010). Achieving Secure, Scalable
and Fine-grained Data Access Control in Cloud Computing. Proceedings of The
2010 30th International Conference on Computer Communications. 14-19 March.
San Diego, 1-9.
Somani, U., Lakhani, K. and Mundra, M. (2010). Implementing Digital Signature with
RSA Encryption Algorithm to Enhance the Data Security of Cloud in Cloud
Computing. Proceedings of 2010 1st Parallel Distributed and Grid Computing
(PDGC). 28-30 October. Solan, 211-216.
Soni, M., Namjoshi, J. and Pillai, S. (2013). Robustness and Opportuneness based
Approach for Cloud Deployment Model Selection. Proceedings of 2013
International Conference on Advances in Computing, Communications and
Informatics (ICACCI). 22-25 August. Mysore, 207-212.
Soni, S. and Soni, A. (2013). Brief Analysis of Methods for Cloud Computing Key
Management. Journal of Information Engineering and Applications (JIEA), 3 (6),
42 – 45. IISTE Publications.
Stamou, K., Jean-Henry, M., Benjamin, G. and Jocalyn, A. (2012). Service Level
Agreement as a Service: Towards Security Risk Aware SLA Management.
Proceedings of 2012 2nd International Conference on Cloud Computing and
Services Sciences (CLOSER). 18-21 April. Porto, 663-669.
Stefania, D., Alecsandru, P. and Emil, S. (2012). Homomorphic Encryption Schemes
and Applications for a Secure Digital World. Journal of Mobile, Embedded and
Distributed Systems. (JMEDS), 4 (4), 224 – 232.
Stipic, A. and Bronzin, T. (2012). How Cloud Computing is (not) Changing the Way
We do BI. Proceedings of the 2012 35th International Convention on MIPRO. 21-
25 May. Opatija, 1574-1582.
Sun, J. and Sha-sha, Y. (2011). The Application of Cloud Storage Technology in
SMEs. Proceedings of 2011 International Conference on E-Business and E-
Government (ICEE), 6-8 May. Shanghai, 1-5.
Sun, L., Zishan, D. and Guo, J. (2010). Research on Key Management Infrastructure in
Cloud Computing Environment. Proceedings of 2010 9th International Conference
on Grid and Cooperative Computing (GCC). 1-5 November. Nanjing, 404-407.
Syam, P. and Subramanian, R. (2011). An Efficient and Secure Protocol for Ensuring
Data Storage Security in Cloud Computing. International Journal of Computer
Science Issues (IJCSI), 8 (6), 261 – 274.
Taeho, J., Xiang-Yang, L., Zhiguo, W. and Meng, W. (2013). Privacy Preserving
Cloud Data Access with Multi-Authorities. Proceedings of 2013 IEEE INFOCOM.
14-19 April. Turin, 2625-2633.
Tripathi, A. and Mishra, A. (2011). Cloud Computing Security Considerations.
Proceedings of 2011 IEEE International Conference on Signal Processing,
Communications and Computing (ICSPCC). 14-16 September. Xian, 1-5.
Ullrich, M., Hagen, K. and Lassig, J. (2012). Public Cloud Extension for Desktop
Applications-Case Study of a Data Mining Solution. Proceedings of 2012 2nd
Symposium on Network Cloud Computing and Applications (NCCA). 3-4 December.
London, 53-64.
Ushadevi, R. and Rajamani, V. (2012). A Modified Trusted Cloud Computing
Architecture based on Third Party Auditor (TPA) Private Key Mechanism.
International Journal of Computer Applications, 58 (22), 1 – 9. IJCA Publications.
Vahid, A., Seyed, T. and Kamran, Z. (2012). A Survey on Cloud Computing and
Current Solution Providers. International Journal of Application or Innovation in
Engineering and Management (IJAIEM), 1 (2), 226 – 233.
Varalakshmi, P. and Deventhiran, H. (2012). Integrity Checking for Cloud
Environment using Encryption Algorithm. Proceedings of 2012 International
Conference on Recent Trends In Information Technology (ICRTIT). 19-21 April.
Chennai, 228-232.
Victor, M., Peter, M., Aida, O. and Gunka, A. (2013). Eliciting Risk, Quality and Cost
Aspects in Multi-cloud Environments. Proceedings of 2013 The 4th International
Conference on Cloud Computing, GRIDs, and Virtualization. 27 May – 1 June.
Valencia, 238-243.
Wang, B., Baochun, L., Hui, L. and Fenghua, L. (2013). Certificateless Public Auditing
for Data Integrity in the Cloud. Proceedings of 2013 IEEE Conference on
Communications and Network Security (CNS). 14-16 October. National Harbor,
136-144.
Wang, W., Yin, H., Chen, L., Huang, X. and Sunar, B. (2013). Exploring the
Feasibility of Fully Homomorphic Encryption. Transactions on Computers, 8 (99),
1 – 10. IEEE Computer Society.
Wei, L., Haishan, W., Xunyi, R. and Sheng, L. (2012). A Refined RBAC Model for
Cloud Computing. Proceedings of 2012 IEEE/ACIS 11th International Conference
on Computer and Information Science (ICIS). 30 May-1 June, Shanghai, 43-48.
Xiaoyong, L. and Junping, D. (2013). Adaptive and Attribute-based Trust Model for
Service Level Agreement Guarantee in Cloud Computing. Information Security, 7
(1), 39 – 50. Institute of Engineering and Technology.
Xing, W., Ming, W., Zhang, W. and Yike, G. (2012). Cloud Program with a Pricing
Strategy for IaaS in Cloud Computing. Proceedings of 2012 IEEE 26th
International Parallel and Distributed Processing Symposium Workshops & PhD
Forum (IPDPSW). 21-25 May. Shanghai, 2316-2319.
Yen-Hung, K., Yu-Lin, J. and Juei-Nan, C. (2013). A Hybrid Cloud Storage
Architecture for Service Operational High Availability. Proceedings of 2013 IEEE
37th Annual Computer Software and Applications Conference Workshops
(COMPSACW). 22-26 July. Kyoto, 487-492.
Yogesh, K., Rajiv, M. and Harsh, S. (2011). Comparison of Symmetric and
Asymmetric Cryptography with Existing Vulnerabilities and Countermeasures.
International Journal of Computer Science and Management Studies (IJCSMS), 11
(2), 60 – 63.
Yu-Hui, W. (2011). The Role of SaaS Privacy and Security Compliance for Continued
SaaS Use. Proceedings of 2011 7th International Conference on Networked
Computing and Advanced Information Management (NCM). 21-23 June. Gyeongju,
303-306.
Zeng, S. and Xu, J. (2010). The Improvement of PaaS Platform. Proceedings of 2010
1st International Conference on Networking and Distributed Computing (ICNDC).
21-24 October. Hangzhou, 156-159.
Zhang, J. and Zhang, N. (2011). Cloud Computing-based Data Storage and Disaster
Recovery. Proceedings of 2011 International Conference on Future Computer
Science and Education (ICFCSE). 20-21 August. Xian, 629-632.
Zaigham, M. and Saqib, S. (2013). Software Engineering Frameworks for Cloud
Computing Paradigm. (1st Edition). Springer-Verlag. Springer London Heidelberg
New York Dordrecht.
Zissis, D. and Lekkas, D. (2012). Addressing Cloud Computing Security Issues. Future
Generation Computer Systems (FGCS), 28 (3), 583 – 592. Elsevier.
Zlatko, S., Eva, L., Antonio, C., Marcos, O. and Vjeran, S. (2012). Performing
Systematic Literature Review in Software Engineering. Proceedings of 2012
Central European Conference on Information and Intelligent Systems. 19-21
September. Croatia, 441-447.
APPENDIX A
PAPERS PUBLISHED DURING THE AUTHOR’S CANDIDATURE
The author published and presented several research papers in international
conferences as well as journals, as an author and co-author, during the entire period
of study. These papers have been cited by 40 researchers. The list of publications is
as follows:
Journals
Sarfraz Nawaz Brohi, Mervat Adib Bamiah, Suriayati Chuprat, and Jamalul-lail Ab
Manan, 2013. Design and Implementation of Privacy Preserved Off-Premises Cloud
Storage. Journal of Computer Science, 10(2): 210-223.
Mervat Adib Bamiah, Sarfraz Nawaz Brohi, Suriayati Chuprat and Jamalul-lail Ab
Manan, 2013. Trusted Cloud Computing Framework for Healthcare Sector. Journal
of Computer Science, 10(2): 240-250.
The two publications mentioned above are indexed by Scopus with impact factor.
Sarfraz Nawaz Brohi and Mervat Adib Bamiah, 2011. Challenges and Benefits for
Adopting the Paradigm of Cloud Computing. International Journal of Advanced
Engineering Sciences and Technologies (IJAEST), 10(2): 286-290.
Mervat Adib Bamiah and Sarfraz Nawaz Brohi, 2011. Exploring the Cloud
Deployment and Service Delivery Models. International Journal of Research and
Reviews in Information Sciences (IJRRIS), 1(3): 77-80.
Sarfraz Nawaz Brohi and Mervat Adib Bamiah, 2011. Exploit of Open Source
Hypervisors for Managing the Virtual Machines on Cloud. International Journal of
Advanced Engineering Sciences and Technologies (IJAEST), 9(1): 55-60.
Mervat Adib Bamiah and Sarfraz Nawaz Brohi, 2011. Seven Deadly Threats and
Vulnerabilities in Cloud Computing. International Journal of Advanced Engineering
Sciences and Technologies (IJAEST), 9(1): 87-90.
Sarfraz Nawaz Brohi, 2011. A Trusted Virtual Private Space Model for Enhancing
the Level of Trust in Cloud Computing Technology. International Journal of
Research and Reviews in Information Sciences (IJRRIS), 1(3): 74-76.
Conference Papers (Indexed by Scopus)
Sarfraz Nawaz Brohi, Mervat Adib Bamiah, Muhammad Nawaz Brohi, and
Rukshanda Kamran, 2012. Identifying and Analyzing Security Threats to Virtualized
Cloud Computing Infrastructures, Presented in International Conference on Cloud
Computing Technologies, Applications and Management (ICCCTAM), pp. 151-155.
Published in IEEE Xplore Digital Library.
Sarfraz Nawaz Brohi, Mervat Adib Bamiah, Suriayati Chuprat, and Jamalul-lail Ab
Manan, 2012. Towards an Efficient and Secure Educational Platform on Cloud
Infrastructure. Presented in International Conference on Cloud Computing
Technologies, Applications and Management (ICCCTAM), pp. 145-150. Published in
IEEE Xplore Digital Library.
Mervat Bamiah, Sarfraz Brohi, Suriayati Chuprat, and Jamalul-lail Ab Manan. 2012.
A Study on Significance of Adopting Cloud Computing Paradigm in Healthcare
Sector. Presented in International Conference on Cloud Computing Technologies,
Applications and Management (ICCCTAM), pp. 65-68. Published in IEEE Xplore
Digital Library.
Mervat Bamiah, Sarfraz Brohi, Suriayati Chuprat, and Muhammad Nawaz Brohi,
2012. Cloud Implementation Security Challenges. Presented in International
Conference on Cloud Computing Technologies, Applications and Management
(ICCCTAM), pp. 174-178. Published in IEEE Xplore Digital Library.
Mervat Adib Bamiah, Sarfraz Nawaz Brohi and Suriayati Chuprat, 2012. Using
Virtual Machine Monitors to Overcome the Challenges of Monitoring and Managing
Virtualized Cloud Infrastructures. Presented in International Conference on Machine
Vision (ICMV), pp. 187-19. Published in SPIE Digital Library.
APPENDIX B
CERTIFICATES OBTAINED DURING THE AUTHOR’S CANDIDATURE
In order to enhance his preliminary knowledge in the field of cloud computing
and information security, the author obtained several certificates from well-known
organizations such as IBM, Rackspace, CSA and EC-Council. The certificates
achieved are as follows:
IBM Certified Solution Advisor - Cloud Computing Architecture
Version-1.
IBM Certified Solution Architect - Cloud Computing Infrastructure
Version-1.
CSA Certified Cloud Computing Security Knowledge.
Certified CloudU from Rackspace Hosting.
EC-Council Certified Ethical Hacker Version-7.
Health Informatics in Cloud Certified from Georgia Institute of
Technology, USA.
APPENDIX C
SURVEY DESIGN AND DELIVERY
The survey used for this research was delivered to experts using the following
introductory message.
Dear Sir/Madam,
Greetings and good day to you. This short survey is conducted as a partial
requirement for completing the Doctor of Software Engineering research at the
Advanced Informatics School (AIS), Universiti Teknologi Malaysia (UTM). The
survey requires feedback from experts like you. In appreciation of your allocated
time and effort, we will be delighted to provide you with a free e-book on cloud
computing security and privacy upon receiving your response.
Kindly feel free to provide us your feedback; your private information will be kept
strictly confidential and used only for the analysis of this survey.
Please click on the following link, or copy it into your browser, to start the survey.
https://www.surveymonkey.com/s/cloud-storage-service
Thank you very much for your participation!
The design of the survey was based on the following five questions.
Question 1: Please provide your name and email address (optional, but your
information will help us to validate the survey results and to send you the e-book).
Name
Email address
Question 2*: Organizations dealing with confidential data are reluctant to use
remotely located third-party cloud storage services due to emerging data
confidentiality and integrity concerns.
Question 3*: To overcome the data confidentiality and integrity concerns that
prevent organizations dealing with confidential data from using cloud storage
services, and to ensure delivery of trusted cloud storage services to the clients, the
undertaken research designed, developed and deployed a secure cloud storage model
consisting of the following processes:
Each data cell is encrypted and decrypted by the client directly from the
upload and download stream using RSA partial homomorphic cryptography.
The client is able to perform operations (insertion, updating, deletion, and limited
computations) on data while it remains encrypted at the cloud storage.
The client generates VMD using SHA-1 and sends it to the TTP as an
encoded sound file which can only be decoded by the TTP to perform
auditing services. Considering key management best practices, the client sends
secret decryption keys to TTP for secure storage. Keys are automatically
encoded in small sound files while uploading. Only the client is privileged to
decode the sound file containing the secret keys.
The TTP performs the requested auditing services. If data are violated, they will be
recovered from the backup zone by requesting the cloud admin. The TTP will send
a safe signal to the client if the final data status is intact; otherwise, actions will take
place by considering the SLA, described in Question 5.
The developed model is deployed on a cloud computing infrastructure by
implementing 256-bit SSL. An access-records log file is maintained for
detecting violations or fake claims from malicious users.
Operations are only performed by authorized and privileged users due to the
implementation of a multi-factor authentication and authorization process using
RBAC with CRSCG, described in Question 4.
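The SHA-1-based verification metadata (VMD) step listed above can be sketched as follows. This is a minimal illustration assuming the VMD is simply a SHA-1 digest of each encrypted data cell; the sound-file encoding used to transport the VMD to the TTP is omitted, and the function names are illustrative, not taken from the actual system.

```python
# Sketch of client-side VMD generation and TTP-side auditing, assuming the
# VMD is a SHA-1 hex digest of the ciphertext of a data cell.
import hashlib

def make_vmd(encrypted_cell: bytes) -> str:
    """Client side: digest the ciphertext so the TTP can audit it later."""
    return hashlib.sha1(encrypted_cell).hexdigest()

def audit(encrypted_cell: bytes, stored_vmd: str) -> bool:
    """TTP side: recompute the digest and compare with the stored VMD."""
    return make_vmd(encrypted_cell) == stored_vmd

cell = b"ciphertext bytes fetched from cloud storage"
vmd = make_vmd(cell)
assert audit(cell, vmd)                    # intact data passes the audit
assert not audit(cell + b"x", vmd)         # any modification is detected
```

Because only the digest is stored with the TTP, the audit reveals whether the ciphertext changed without exposing the plaintext.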
Feedback: This model can be considered one of the valuable contributions in
the field of cloud security, enabling organizations dealing with confidential data
to acquire and use cloud storage services with trust and without data confidentiality
and integrity concerns.
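The RSA partial homomorphism that the Question 3 processes rely on can be illustrated with textbook RSA: multiplying two ciphertexts yields the ciphertext of the product of the plaintexts, which is what enables limited computation on encrypted data at the storage side. The sketch below uses toy key sizes and no padding; it is an illustration of the mathematical property only, not the thesis implementation.

```python
# Textbook-RSA sketch of the multiplicative homomorphism:
#   Enc(a) * Enc(b) mod n == Enc(a * b)
# Toy primes and no padding -- for illustration only, never for real use.

def keygen():
    p, q = 61, 53                  # toy primes; real keys use >= 2048-bit moduli
    n = p * q                      # modulus (3233)
    phi = (p - 1) * (q - 1)        # Euler totient (3120)
    e = 17                         # public exponent, coprime with phi
    d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)
    return (e, n), (d, n)

def encrypt(m, pub):
    e, n = pub
    return pow(m, e, n)

def decrypt(c, priv):
    d, n = priv
    return pow(c, d, n)

pub, priv = keygen()
a, b = 7, 12
ca, cb = encrypt(a, pub), encrypt(b, pub)

# The untrusted storage side multiplies ciphertexts without seeing plaintexts...
c_prod = (ca * cb) % pub[1]

# ...and the client decrypts to obtain the product of the plaintexts.
assert decrypt(c_prod, priv) == (a * b) % pub[1]   # 84
```

Note that plain RSA is homomorphic only for multiplication (hence "partial"); insertion, updating and deletion of encrypted cells work because each cell is an independent ciphertext.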
Question 4*: Using the developed system, after an initial successful login by
entering the username and password, a controlled environment governed by RBAC
will be created. It will allow a user to perform all those operations which are granted
to his/her role. In order to perform an operation, the user will request the cloud
server, and the CRSCG component will generate a random 12-character secret code
made from the set of upper- and lower-case alphabets, special symbols, numbers and
all other characters on a standard keyboard. This secret code will be sent to the
requesting user via an HTTPS connection as an email alert to perform the desired
operation(s). The secret code can be used for the entire session or until the user
requests a fresh one.
Feedback: The proposed multi-factor authentication and authorization
process is secure, and it will provide an additional layer of security for the system,
since an unauthorized party cannot perform any privileged operation even if the
username and password of a user are compromised.
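The CRSCG secret-code generation described in Question 4 can be sketched as below. The function and alphabet names are illustrative assumptions, not taken from the actual system; the sketch only shows how a 12-character code over the stated character set can be drawn from a cryptographically secure source.

```python
# Sketch of CRSCG-style secret-code generation: a random 12-character code
# from upper/lower-case letters, digits and the printable special symbols
# found on a standard keyboard.  Names here are illustrative assumptions.
import secrets
import string

ALPHABET = (string.ascii_uppercase + string.ascii_lowercase
            + string.digits + string.punctuation)

def generate_secret_code(length: int = 12) -> str:
    # secrets.choice uses a CSPRNG, which matters for an authentication factor;
    # random.choice would be predictable and unsuitable here.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

code = generate_secret_code()
assert len(code) == 12
assert all(ch in ALPHABET for ch in code)
```

The generated code would then be delivered to the user over HTTPS as an email alert, as the survey text describes.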
Question 5*: Besides the technical issues, the lack of an effective SLA is also a
major reason for organizations not to use cloud storage services. This research
designed an SLA which focuses on the security requirements of organizations
requiring consistent data confidentiality and integrity. Key elements of the proposed
SLA are defined as follows:
The CSP will provide physical and logical security to avoid illegal access, theft or
tampering of data from outsider or malicious-insider attacks. The CSP should also
allow third-party security audits to validate the security controls and standards
used to protect clients' data on the cloud.
Data will be encrypted with public-key cryptography techniques and located
within permitted backup zones. Decryption keys will be provided only to the
client's legal admin. The client's data will not be revealed to any unauthorized
party, not even the CSP, while at storage or during any operation.
The CSP will facilitate efficient key management services, which include secure
generation, use, safe storage and destruction of keys. The TTP will store the client's
keys in a secure manner and provide them to the client whenever requested.
Conducting the audit service, generating auditing reports, and storing the client's
keys safely with redundant backups will be the responsibilities of the TTP.
Auditing reports will be shared with the client and the CSP.
If the client's data are violated by an outsider or a malicious insider, the CSP will
immediately report to the client. A penalty will be imposed on the CSP according
to the data violation, which can be in the form of a cash amount or free service as
settled by the client and the CSP, depending on the sensitivity of the data and
the impact or nature of the violation. If the client is found to be malicious, the CSP
may stop service with the client and impose cash penalties. This research does
not assume any fixed penalty because it may vary for different organization
types such as healthcare, education or banking.
The client must be permitted to exit from the CSP when the guarantees repeatedly
cannot be met. When an organization exits, all their data must be provided to them
with confidentiality and integrity maintained. The CSP must also remove their
data from storage disks and associated backup devices at each location.
Feedback: When the above SLA elements are merged with the actual SLAs
offered by well-known CSPs, they will enhance the clients' level of trust for using
remotely located cloud storage services to store their confidential data.