Resilient Cloud Data Storage Services
Hemayamini Kurra, Youssif Al-Nashif, Salim Hariri
NSF Center for Cloud and Autonomic Computing
The University of Arizona
{hemayaminikurra, jbjan, hariri}@email.arizona.edu
ABSTRACT
With the advance of cloud computing technologies, there is a
huge demand for computing resources and storage. Many
organizations prefer to outsource their storage and other
resources. As the data reside in third-party data centers,
security is becoming a major concern. In this paper we propose a
Resilient Cloud Storage (RCS) architecture that addresses the
major security issues for cloud storage such as access control,
confidentiality, integrity, and secure communications. Our
resilient approach is based on moving target defense and key
hopping techniques. Data is partitioned into a random number of
partitions, and a different key is used to encrypt each partition.
We also show that key hopping allows us to use a smaller key
length than is normally required, improving performance without
compromising security. Our experimental results show that we can
improve performance by 50% when we use a key of length 512 bits
compared with a certificate technique that uses a key length of
2048 bits.
Categories and Subject Descriptors
D.4.2 [OPERATING SYSTEMS (C)]: Storage Management –
Distributed memories. D.4.6 [OPERATING SYSTEMS (C)]:
Security and Protection– Access controls, Authentication,
Cryptographic controls. D.4.8 [OPERATING SYSTEMS (C)]:
Performance– Measurements. E.3 [DATA ENCRYPTION]:
Data encryption standard (DES), Public key cryptosystems,
Standards (DES).
General Terms
Algorithms, Management, Measurement, Performance, Design,
Security, Languages.
Keywords
Cloud Computing, Cloud Storage, Resilience, Encryption,
Decryption.
1. INTRODUCTION
As networking technologies advance, there is an increasing need
for computing resources and storage. The world's information is
doubling every two years, and 48 hours of video are uploaded to
YouTube every minute. The need for cloud storage is increasing
dramatically over the years. A recent report released by IHS
iSuppli projects a solid 25% increase in cloud storage
subscriptions in 2013, with uninterrupted double-digit growth
anticipated to follow until at least 2017. The average annual run
rate of IP traffic by the end of 2015 is estimated at 966 exabytes
worldwide [29], and 30% of vendors use cloud services for
storage [33].
Cloud storage has become a business solution for remote storage
and backup of data, as it offers infinite storage space for clients
(enterprises/individuals) in a pay-as-you-go manner [1] [2]. For
example, Ericsson, a major provider of technology and services to
telecom operators, uses Amazon Elastic Compute Cloud (Amazon
EC2), Amazon Simple Storage Service (Amazon S3), and the
RightScale Cloud Management Platform for provisioning and
auto-scaling functionality. Similarly, Shaw Media uses Amazon
Web Services (AWS) to improve uptime for its high-traffic
websites and to implement a disaster recovery strategy that
resulted in a $1.8 million saving, the cost of establishing a
second physical site. There is also growing interest in and
deployment of cloud services that use AWS for many purposes
(application hosting, backup and storage, computation,
networking) [3]. On the other hand, cloud storage tools like
Dropbox and cloudme.com allow individuals to store their data on
the cloud and access it from any computer or mobile device with
Internet access. Devices with limited storage, such as mobile
phones and tablets, prompt users to store their audio/video files
on the cloud.
Cloud infrastructure can be either public or private. In a private
cloud, the infrastructure is under the control of the customer,
whereas in a public cloud the infrastructure is controlled and
managed by cloud infrastructure providers, and this leads to
significant security and privacy risks. According to the Future of
Cloud Computing Survey 2011, the main inhibitor to cloud
adoption is security [33]. Globally, 43 percent of companies
currently using a cloud computing service reported a data security
lapse or issue with that service within the last 12 months [32],
15% of data centers do not have data backup and recovery plans
[30], and the cost of a data center outage averages $505,502 per
incident [31]. The biggest barrier to the adoption of cloud storage
is the concern over the confidentiality and integrity of customers'
data [4]. In this paper, we
present and evaluate a resilient cloud storage service to overcome
the security and privacy issues. The RCS architecture solves two
main security problems: 1) Access control that ensures that only
authorized users can access cloud data; and 2) Secure
communications that prevent any data leakage while the data is in
transit. Our experimental results and evaluation of our approach
show that a 50% improvement in performance can be achieved,
along with secure data services, by using a reduced key length of
512 bits.
The remainder of the paper is organized as follows. In Section 2,
we present background information and summarize related work.
In Section 3, we describe the RCS architecture and our
implementation approach. In Section 4, we describe our
experimental environment and evaluate the performance and
overhead of the RCS services. Section 5 concludes the paper.
2. RELATED WORK
Cloud security suffers from a wide range of cyber attacks,
including those that target physical machines as well as cloud
virtualized environments [5][6][7][8]. In addition, one of the main
security issues in cloud computing is insider attacks; with the
exchange of cloud data between different organizations, the risk
of insider attacks increases.
Data security is a major concern for cloud consumers and
providers, and this has resulted in the development of a wide
range of techniques to solve cloud data security problems [9] [10]
[11].
Data security can be addressed from different perspectives: Data
locality, Data Integrity, Data Segregation, Data Confidentiality,
and Data Breaches. In what follows, we briefly describe each of
these concepts.
Data locality in cloud means that the cloud providers should have
the ability to control the data location in order to satisfy the
customer’s preference on the data storage locations and
boundaries. Data locality in the cloud is important because
different organizations and countries have different regulations
and limitations regarding privacy and data location.
Data integrity is another important cloud security challenge which
means that the data is trustworthy during its life cycle. Also, since
the data may be replicated in multiple places across cloud’s
datacenters, any change in the data has to be propagated
throughout all replications.
Data segregation is another security requirement for cloud since
the data from different customers reside at the same location
(multi-tenancy). Therefore, the intrusion into a user’s data by
adjacent users is possible. This kind of intrusion can be
performed either by hacking the application or through client code
injection (e.g., SQL injection) [37]. Data access is another
important cloud security requirement. Each customer has his/her own
access policy which has to be applied on his/her own data. The
cloud access control model should be able to manage access to the
data from inside and outside of the customers’ organizational
boundary. Proper access control mechanism is needed to protect
the customers' data from unauthorized users. It should also be
able to define which parts of the data are accessible to each user.
Data confidentiality and privacy are among the major concerns of
cloud customers. In fact, by adopting cloud computing, the
customers are disclosing their data to the cloud providers. The
main concern here is how the cloud providers treat customer’s
confidential data.
Data breaches can also threaten the cloud consumers. Since the
customers’ data are uploaded to the cloud, any breach in the cloud
environment potentially threatens all the customers. This makes
the cloud a high-value target for outside attackers. In addition,
insider attacks remain a high-risk threat from employees of the
cloud provider who potentially have access to customers'
information.
In [12] the authors presented a Cloud Security Storage system
(CSSS) that provides integrated information isolation, access
control, virus detection, and metadata safeguards for crucial data.
In their CSSS architecture, sensitive data (key data) are sliced and
stored across different storage nodes, and metadata receives
special protection, such as being stored encrypted on a dedicated
metadata server. Their
architecture has a request-classification sub-module which
classifies requests as either read or write requests and sends
them to the read-processing or write-processing sub-module,
respectively. A similar approach is adopted in our architecture, where
the read, write/append requests are separated from authentication
requests.
In [13] the authors proposed an architecture which includes three
services: data isolation service, secure Intra-cloud data migration
service and secure Inter-cloud data migration service under the
environment of a Private Storage Cloud extended with a
partner/public cloud. The data isolation service is needed when a
company has to put some of its data on a cloud that is shared
with other users; this data should be isolated from other
users' data. The other two services deal with the security of the
data migration either inside the cloud or between the private cloud
and public cloud.
In [14] the authors conducted a survey on different cloud storage
security issues and recommended that data security be taken care
of across all data life-cycle stages: data-in-transit, data-at-rest,
data lineage, data remanence, and data provenance. In our
approach to secure communications, data in transit is protected by
the channel encryption technique.
In [15] the authors explained how encryption is important to a
database. They also proposed an encryption mechanism for data-
in-motion and data-at-rest, as well as a hybrid solution to boost
the performance of a database system when data encryption is a
requisite. They followed a master-key/local-key pair approach for
their encryption algorithm.
In [16], a complete overview of the performance of all the major
encryption algorithms is presented. It showed that the Blowfish
algorithm is the fastest but suffers from a weak-key problem,
whereas triple DES is the slowest algorithm. Their results show
that the DES encryption algorithm offers a reasonable tradeoff
between performance and security. In our approach DES is used in CFB
(Cipher Feedback) mode [25].
The CSA (Cloud Security Alliance) report on the top threats to
cloud computing [35] lists insecure interfaces and APIs as one of
the top threats; ensuring strong authentication and access control,
in addition to encrypted transmission, is one of the recommended
remedies. Thus, data in transit is more vulnerable to attacks than
data at rest.
In this paper, we present a resilient approach to securing
communications between the cloud and the storage server using
the DES encryption algorithm, and we show how to apply key
hopping techniques to file partitions. We also show that by using
shorter keys with key hopping we can achieve better performance
and security than the traditional method of using a long key with
no hopping.
3. ARCHITECTURE:
The resilient cloud storage services are implemented as shown in
Figure 1. When a client requests to use the cloud data storage
services, such as reading or writing a file, the SM (Self-
Management) module (SMM) initiates the secure communication
by checking the authentication of the client. The CA certificates
[17] are used to verify both the client and the SM module. At this
point if the authentication fails, the client is then added to the
blocked list by the SM module until the client authenticity is
verified. The secure communication between client and the cloud
storage is implemented in three steps. CA certificate verification
is the first step while the Diffie-Hellman (DH) key exchange
protocol and key hopping with file partitioning are the second and
third steps, respectively. This is explained more in detail in
section 3.6. The SM module initiates the DH key [18] generation
algorithm between the SMA (Storage management agent) and the
SM module. Using the DH key exchange protocol (explained in
section 3.3) the public and private key pair is generated. Once the
key is generated it is distributed to the client. The possibility of
the key being compromised is highest in this step, so CA
certificate verification is performed to mitigate it. Since the
client can then be sure about the sender of the key, a
man-in-the-middle attack becomes extremely difficult. This client
key distribution is clearly
explained in section 3.5. Once the key is received by the client,
the communication between the client and the data server is
encrypted in two layers. In the first layer, the files that are
transmitted between the client and the server are divided into parts
and each part is encrypted using DES (Data Encryption Standard)
algorithm. The DES keys used for encryption must be known to
the data server so that it can decrypt the file parts, and hence they
have to be sent to the server. To ensure that a DES key is not
compromised during transmission between the client and the data
server, the key is not sent in its raw form; instead, it is encrypted
using the RSA algorithm with the public/private key pair
generated by the DH key exchange protocol. Thus the
data is encrypted using DES algorithm and the DES keys used are
encrypted using RSA algorithm. For an attacker to successfully
attack the system and steal the data in transit in a particular time
window, he should know the exact file part and the DES key that
is used to encrypt that file part and the RSA key that is used to
encrypt the DES key. In the very worst case, if the attacker
discovers all of these variables (which is nearly impossible),
he/she only gets to see the data in that particular time window; in
the next time window, all the keys change again.
A sequence of random shorter keys will be used where each key
will be active for a random period of time in a similar manner to
frequency hopping in wireless networks [19]. The length of the
time window to be used for each key is determined by the SM
module. When a small key is used, the time window should be
small and it can be hopped (key hopping is discussed in Section
3.4) several times in order to make the system secure and resilient
to attacks; the attackers will have less time to figure out the key
and by the time they might be able to discover it, it will be
changed to another key.
Thus, using this architecture, we show that shorter keys improve
performance, while the diversity provided by the number of key
hops, the variable time window length, and the variable file
partitioning makes compromising the system extremely difficult.
The encryption algorithm used here is DES (Data Encryption
Standard) [20] in CFB mode (Cipher Feedback) for data and RSA
algorithm for encrypting the DES keys [25]. The main
components to implement the resilient storage service are
discussed in further detail next.
Figure 1: Architecture for Resilient Data Storage services.
3.1 Self-Management Module:
The SM architecture is based on our autonomic computing
environment (Autonomia) [21] [22] [23]. The SM's main functions
are implemented in two software modules (see Figure 2): the Observer
and Controller modules. The Observer module monitors and
analyzes the current state of the managed cloud resources or
services.
Figure 2: Self-Management Architecture
The Controller module is delegated to manage the cloud
operations and enforce the resilient operational policies. In fact,
the Observer and Controller pair provides a unified management
interface to support the SM’s self-management services by
continuously monitoring and analyzing current cloud system
conditions in order to select the appropriate plan to correct or
remove anomalous conditions once they are detected and/or
predicted. Figure 3 shows the self-management algorithm to
implement the observer and control functions.
Figure 3: Self-management algorithm
The main functions of the self-management module in the RCS
model are keeping track of client certificates and authenticating
them, adding clients to the access control list, deciding the time
window for key hopping, and managing the file partitions and their
respective keys.
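As a rough illustration of the observer-controller loop in Figure 3, the following C sketch shows the control flow we assume; the types and function names (observe, enforce_policy) are hypothetical placeholders for the Autonomia primitives, not the actual implementation.

#include <unistd.h>

/* Hypothetical stand-ins for the Autonomia monitoring and control primitives. */
typedef struct { int anomalous; } SystemState;
SystemState observe(void);                 /* monitor and analyze cloud state   */
void enforce_policy(const SystemState *s); /* select and apply corrective plan  */

/* Sketch of the observer-controller loop: continuously monitor, and act when
 * an anomalous condition is detected or predicted. */
void self_management_loop(void)
{
    for (;;) {
        SystemState s = observe();
        if (s.anomalous)
            enforce_policy(&s);
        sleep(1);                          /* monitoring interval (illustrative) */
    }
}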
3.2 Secure Communications:
When a client wants to access the cloud services, the Self-
Management module starts a timer and initiates the DH key
generation protocol between the client and SM. In order to avoid
the man-in-the-middle attack, the server certificates are verified
by the client to make sure that it is receiving the correct key from
the correct sender [24]. This key is generated by the client system to
prove its identity to the cloud provider. The access control list is
then updated and communication starts between the client and
cloud system. The SM takes care of channel encryption and key
hopping during random sequence of several time windows. The
channel is encrypted using DES (Data Encryption Standard) in
CFB64 (Cipher Feedback) mode [25]. In this CFB mode, the first
8 bytes of the key generated using the DH algorithm are used as
the DES key; the ciphertext of each block is fed back into the
cipher to encrypt the next block, and this process is repeated until
the last block is encrypted. A generated key is valid only for
that particular time window and whenever the key time expires
the SMM (Self-Management Module) will again launch the DH
key protocol. The details about DH algorithm, Key hopping, client
key distribution and key life cycle are discussed in further detail
in later sections.
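A minimal sketch of this channel encryption, assuming OpenSSL's low-level DES API, is shown below; the buffer names, the zero IV, and the way the 8-byte key is taken from the DH secret are illustrative assumptions rather than the exact RCS code.

#include <openssl/des.h>
#include <string.h>

/* Sketch: encrypt one buffer in DES CFB64 mode, keyed with the first 8 bytes
 * of the DH-derived secret, as described above. */
void channel_encrypt(const unsigned char *shared_secret,
                     const unsigned char *in, unsigned char *out, long len)
{
    DES_cblock key;
    DES_key_schedule schedule;
    DES_cblock ivec;
    int num = 0;

    memcpy(key, shared_secret, 8);         /* first 8 bytes of the DH secret */
    DES_set_odd_parity(&key);
    DES_set_key_checked(&key, &schedule);
    memset(ivec, 0, sizeof(ivec));         /* IV handling is an assumption   */

    /* CFB64: each ciphertext block is fed back to encrypt the next block. */
    DES_cfb64_encrypt(in, out, len, &schedule, &ivec, &num, DES_ENCRYPT);
}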
3.3 DH system
The keys for the communication between the client and the cloud
services are established using the Diffie-Hellman (DH) key
exchange algorithm [18]. Diffie-Hellman is a key exchange
algorithm based on modular arithmetic that can be used to securely exchange keys
between two systems that don't share any mutual keys. The two
systems create a shared secret over an insecure communications
channel. Simply transmitting a symmetric cipher key is clearly
inadequate because anyone reading the traffic could use the key to
decrypt anything encoded with it. After agreeing on a large
modulus (m) and a common base number (x), each side picks a
random number (its local secret); computes x to that power (mod
m) and transmits this. Upon receiving this number, the other side
raises it to its local secret power, and has computed x to the
product of the two powers. Anyone snooping on the wire sees the
two partial powers go past but, without the secret exponents,
cannot compute the shared secret.
The master key generated by the SM module is sent to the client
through an SSL session (see steps 2-3 in Figure 5). The client then
uses the key for DES encryption (see steps 30-32, Figure 6). This
shared secret is converted to the DES key and is used for further
communication between the client and the cloud; only the client
who knows the actual key can see the data. In the worst case, if
the key is compromised, only the data in that time window is
compromised, because in the next time window the key will be different. The
DH system is implemented in C language and uses the OpenSSL
library functions.
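For reference, one side of this exchange can be sketched with OpenSSL's low-level DH API (the 1.0.x-era interface available at the time); the 1024-bit parameter size and the function name derive_shared_secret are illustrative assumptions, not the exact RCS code.

#include <openssl/dh.h>
#include <openssl/bn.h>

/* Sketch: generate DH parameters and a local key pair, then combine the
 * peer's public value with our secret exponent to derive the shared secret. */
int derive_shared_secret(const BIGNUM *peer_pub,
                         unsigned char *secret, int secret_len)
{
    DH *dh = DH_new();
    int len = -1;

    /* Agree on a large modulus m and base x (generated locally here; in RCS
     * the SM module drives the exchange and distributes the agreed values). */
    if (dh != NULL &&
        DH_generate_parameters_ex(dh, 1024, DH_GENERATOR_2, NULL) == 1 &&
        DH_generate_key(dh) == 1 &&        /* pick secret a, compute x^a mod m */
        DH_size(dh) <= secret_len) {
        /* dh->pub_key is what gets transmitted; the peer's value comes back
         * as peer_pub, and raising it to our secret gives the shared key. */
        len = DH_compute_key(secret, peer_pub, dh);
    }
    if (dh)
        DH_free(dh);
    return len;                            /* secret length, or -1 on failure */
}

The first 8 bytes of the resulting secret would then seed the DES key used in Section 3.2.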
Figure 4: Diffie-Hellman Key exchange protocol
Figure 5: DH algorithm for key generation and exchange
3.4 Key Hopping:
Using the same key for a long time is not secure, and a long key
incurs high overhead. To overcome this problem, we use shorter keys to
reduce the time it takes to encrypt data, but change them randomly
as it is done in frequency hopping in order to increase the security
of the storage service. The SM module keeps track of the time
window and notifies the client and the server at the start and at the
end of each time window. Thus the client and
server follow the time window provided by the SM module (steps
12 and 28 in Figure 6). Any abnormal behavior by the client or
server is monitored and handled by the observer-controller
mechanism in the SM module. Once the time window ends, the keys
that are used during that period will expire and SM initiates the
generation of keys and distribution of them to various Storage
Management Agents. According to [36], it takes 73 days for a
single dual-core PC to crack a single 512-bit RSA key. Since
distributed computing is widely available, assume the attacker
uses 10 computers to crack the key; it still takes 7-8 days. To
make the argument stronger, assume the attacker uses 100
computers; the time drops to about 1 day. Thus, a 512-bit RSA
key is insecure if it is used for a long time. Here, we consider a
time window of 4.8 hours, which gives an attacker far too little
time to crack the key. Our experimental results (presented in
Section 4) validate this argument.
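The hopping schedule itself can be pictured as a simple driver loop. The C sketch below is only illustrative: the hooks launch_dh_exchange() and distribute_key() are hypothetical names standing in for the SM module's actions described above, not functions from our implementation.

#include <stdlib.h>
#include <time.h>
#include <unistd.h>

/* Hypothetical hooks standing in for the SM module's actions. */
void launch_dh_exchange(void);   /* regenerate keys between SM and the SMAs    */
void distribute_key(void);       /* push the new key to client and data server */

/* Sketch: every (randomized) time window the current keys expire and a fresh
 * DH exchange is triggered, so a stolen key is useful only until the window
 * ends. max_window_seconds would stay well below the 4.8-hour bound above. */
void key_hopping_loop(unsigned int max_window_seconds)
{
    srand((unsigned)time(NULL));
    for (;;) {
        unsigned int window = 1 + (unsigned)rand() % max_window_seconds;

        launch_dh_exchange();
        distribute_key();

        sleep(window);           /* keys are valid only inside this window */
    }
}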
3.5 Client Key Distribution
We use OpenSSL (Secure Sockets Layer) [26] for the
communication between SM module and client. The SM
certificates are verified on the client side to make sure that the key
is received from the right sender. Once the certificate is verified a
secure socket is created for further communication. When the SSL
session is established, the key is protected by the session's
MD5-based cipher suite [27]. A private/public key pair is
generated and the public key is announced to the SM module, but
only the party that holds the private key can decode the entire
key. After creating this secure channel the client obtains the key
to decrypt the data. This key is then used to prove the client's
identity to the SM module.
After successful authentication, the client will be added to the
access control list. In further communication, if the client fails to
prove its authenticity, the SM module will block its connection to
the cloud until the problem is resolved.
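The client side of this step can be sketched with OpenSSL's SSL API as follows; the certificate file name, the sockfd argument, and the function name are illustrative assumptions, and error reporting is omitted for brevity.

#include <openssl/ssl.h>

/* Sketch: establish an SSL session to the SM module and accept the key only
 * if the SM certificate verifies against the trusted CA. */
SSL *open_verified_channel(int sockfd)
{
    SSL_library_init();
    SSL_load_error_strings();

    SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());
    SSL_CTX_load_verify_locations(ctx, "ca-cert.pem", NULL);
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);

    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, sockfd);

    /* Reject the session if the handshake or certificate verification fails. */
    if (SSL_connect(ssl) != 1 || SSL_get_verify_result(ssl) != X509_V_OK) {
        SSL_free(ssl);
        SSL_CTX_free(ctx);
        return NULL;
    }
    return ssl;   /* SSL_read() can now be used to receive the session key */
}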
The algorithm to implement the resilient storage service is shown
in Figure 6. The variables mentioned in the algorithm are self-
explanatory. Initially all the timers are set and the SM module
launches the DH key generator between the SM and the SMA
(steps 1-5, Figure 6). Once the key is generated, both subsystems
wait for client connections (steps 6-9, Figure 6). The client
request first goes to the SM module. The SM will then share the
computed key and its key time window (steps 1-5, Figure 6).
Figure 6: Resilient Storage Service Algorithm
3.6 File Partitioning:
In May 2011, the popular file-sharing service Dropbox was accused
in a complaint to the Federal Trade Commission of using “a single
encryption key for all the user data the company stores.” The
concern is that if a hacker was able to break into Dropbox’s
servers and obtain the key, the hacker could gain access to all of
Dropbox's user data [28]. So to improve the resiliency of stored
data, it is important to partition data into several parts and use
different keys for each data partition. This increases the overhead
but it adds one more layer of security that attackers must
overcome within a short period of time to succeed in accessing the
data.
Figure 7: Algorithm for file partitioning
Figure 7 above shows the algorithm for encrypting a file by
partitioning it and encrypting each part with a different key.
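A minimal sketch of this layer, assuming OpenSSL's DES API and a simple equal-size split, is given below; the partition count, key handling, and naming are illustrative assumptions, since in RCS the number of parts is randomized by the SM module and each DES key is RSA-wrapped before transmission.

#include <openssl/des.h>
#include <openssl/rand.h>
#include <string.h>

/* Sketch: split 'data' into nparts pieces and encrypt each piece in CFB64
 * mode under its own randomly drawn DES key (returned in part_keys). */
int encrypt_partitions(const unsigned char *data, size_t len, size_t nparts,
                       unsigned char *out, DES_cblock *part_keys)
{
    if (nparts == 0)
        return -1;

    size_t part_len = len / nparts;

    for (size_t i = 0; i < nparts; i++) {
        size_t off = i * part_len;
        size_t n = (i == nparts - 1) ? len - off : part_len;

        DES_key_schedule sched;
        DES_cblock ivec;
        int num = 0;

        RAND_bytes(part_keys[i], sizeof(DES_cblock));  /* fresh key per part */
        DES_set_odd_parity(&part_keys[i]);
        DES_set_key_checked(&part_keys[i], &sched);
        memset(ivec, 0, sizeof(ivec));

        DES_cfb64_encrypt(data + off, out + off, (long)n,
                          &sched, &ivec, &num, DES_ENCRYPT);
    }
    return 0;
}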
4. Resilient Data Storage Service Evaluation:
In our experimental evaluation testbed, the storage servers are
implemented using a cluster of virtual machines running on
different nodes of an IBM Blade system. Storage server 1 runs the
Ubuntu 10.04 Linux operating system on all of its virtual
machines, which use the Hadoop Distributed File System.
OpenSSL is used to establish a secure communication channel
between the SMA and the storage systems. The SSL server
provides the CA (Certificate Authority) certificate and the server
certificate; similarly, the client system provides the CA certificate
and the client certificate. Whenever a client requests to connect to
a server, the client certificate, which has already been signed by
that server, is verified against the CA, and if the validation passes,
the communication is established.
To evaluate the performance gain that can be achieved from using
the key hopping technique, let us assume that there are 2000
sessions that use the resilient storage service. In our test-bed, the
average SSL processing time with a 1024-bit key is approximately
4 s. For the RDS algorithm, assuming a key size of 1024 bits and
5 hops, the execution times of its components are as follows: the
DH protocol takes 11 s, key distribution takes 2 s, and DES
encryption/decryption takes 1 s per session.
To quantify the performance gain from using key hopping, we
introduce the Performance Improvement Factor (PIF), which can
be computed as:
PIF = (RTssl - RTRCS) / RTssl
where
RTssl = No. of sessions * Tssl
RTRCS = (TDHprotocol + Tkeydistribution) * No. of hops + No. of sessions * TDES
Here, RTssl is the response time for the system with SSL only,
RTRCS is the response time for the system with the RCS
implementation, Tssl is the SSL processing time per session,
TDHprotocol is the execution time of the DH protocol,
Tkeydistribution is the time taken for client key distribution, and
TDES is the DES encryption/decryption time per session. With the
assumptions above, RTssl = 2000 * 4 = 8000 s and
RTRCS = (11 + 2) * 5 + 2000 * 1 = 2065 s, so the performance
improvement (PIF) is about 74%.
Figure 8 shows the overall overhead time for RDS with respect to
the number of hops. It is clear that as the number of hops
increases the overhead increases.
Figure 8: Performance overhead Vs (Key size and
number of Hops)
We also quantify the performance gain that can be achieved in
encrypting different file sizes as shown in Figure 9. We compared
the performance of using a static key of 2048 bits vs. using the
key hopping technique with 512-bit keys and two hops.
The performance improvement factors are calculated in the same
way using the above-mentioned assumptions:
PIF = (RT2048ssl - RT512ssl) / RT2048ssl
where RT2048ssl is the response time with a 2048-bit SSL key
and no hops, and RT512ssl is the response time with a 512-bit key
and 2 hops using the RDS approach.
The performance improvement factors for file sizes 256 MB, 64
MB and 1 MB are 65.5%, 73.9% and 73.01% respectively.
Figure 9: File Size Vs overhead time for different keys.
Figure 10 shows how the overhead increases as we increase
the number of parts for file partitioning. With a 512 bit long key
and 2 hops, the overhead is almost 50% less than the overhead we
get using a 2048 bit long key with no hopping. This shows that
by using smaller keys, we can improve the performance (50%
with 2 hops as shown in Figure 10) while improving the security
of the system by adding three security layers; for the attacker to
succeed, he/she needs to know the number of partitions, the keys
used in each interval, and the length of each time window.
Figure 10: File parts Vs overhead time for different keys.
5. CONCLUSION
In this paper, we presented an overview of cloud storage security
issues and our approach to implement Resilient Cloud Storage
Services (RCSS). The RCSS architecture can overcome most of
the storage security challenges with less overhead. We use the DH
key generation and key hopping technique to secure the
communications between clients and cloud services. RCSS
implements a time-modulated key hopping architecture which
makes it extremely difficult for an attacker (external or insider) to
succeed in launching attacks against the cloud storage system or
to access or manipulate the stored cloud data. The self-
management capability provides automated configuration
management in order to deliver resilient operation against a wide
range of attacks. We have evaluated
the performance of the RCSS architecture to provide resilient
storage services. Our evaluation shows that a 50% improvement
in performance can be achieved, along with secure data services,
by using a reduced key length (512 bits) instead of the traditional
approach of using a long key (2048 bits) for encryption.
6. Acknowledgements:
This work is partially supported by AFOSR DDDAS award
number FA95550-12-1-0241, and National Science Foundation
research projects NSF IIP-0758579, CNS-0855087 and IIP-
1127873.
7. REFERENCES
[1] M. Armbrust, A. Fox, R. Griffith, A.D. Joseph, R. Katz, A.
Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and
M. Zaharia, “A View of Cloud Computing.” Comm. ACM,
vol. 53, no. 4, pp. 50-58, Apr. 2010.
[2] Yang Tang, Patrick P.C. Lee, John C.S. Lui, Radia Perlman,
“Secure Overlay Cloud Storage with Access Control and
Assured Deletion”, IEEE transactions on dependable and
secure computing, vol. 9, no. 6, November/December 2012.
[3] Amazon, “case studies,”
http://aws.amazon.com/solutions/case-studies/#backup,
2012.
[4] Seny Kamara, Kristin Lauter, “Cryptographic Cloud
Storage”, Microsoft Research.
[5] https://cloudsecurityalliance.org/research/secaas/ [Accessed
in Jan 2013].
[6] D. Cappelli, A. Moore, R. Trzeciak, T. J. Shimeall, Common
Sense Guide to Prevention and Detection of Insider Threats,
3rd Edition, Version 3.1, CERT, January 2009.
[7] M. Schmidt, L. Baumgartner, P. Graubner, D. Bock, and B.
Freisleben,“Malware Detection and Kernel Rootkit
Prevention in Cloud Computing Environments,” in Proc.
19th Euromicro International Conference on Parallel,
Distributed and Network-Based Processing (PDP), 2011, pp.
603–610.
[8] D. Goodin, “Webhost Hack Wipes Out Data for 100,000
Sites”.Internet:
http://www.theregister.co.uk/2009/06/08/webhost_attack/,
[June 8, 2009].
[9] M.R. Abbasy and B. Shanmugam, “Enabling Data Hiding for
Resource Sharing in Cloud Computing Environments Based
on DNA Sequences,” in Services (SERVICES), 2011 IEEE
World Congress on, 2011, pp. 385–390.
[10] J. Feng, Y. Chen, D. Summerville, W. Ku, and Z. Su,
“Enhancing cloud storage security against roll-back attacks
with a new fair multi-party non-repudiation protocol,” in
Consumer Communications and Networking Conference
(CCNC), 2011 IEEE, pp. 521–522.
[11] Lori M. Kaufman, “Data security in the world of cloud
computing”, IEEE Security and Privacy Journal, vol. 7,
issue. 4, pp. 61-64, July- Aug 2009.
[12] Liu Hao, Dezhi Han, "The study and design on secure-cloud
storage system", Electrical and Control Engineering
(ICECE), 2011 International Conference.
[13] Qingni Shen, Yahui Yang ,Zhonghai Wu, Xin Yang, Lizhe
Zhang, Xi Yu, Zhenming Lao, Dandan Wang, Min Long,
"SAPSC: Security Architecture of Private Storage Cloud
Based on HDFS", Advanced Information Networking and
Applications Workshops (WAINA), 2012 26th International
Conference.
[14] Rohit Bhadauria, Sugata Sanyal, "A survey on Security
issues in cloud computing and associated migration
techniques".
[15] Sien. O.B, Samsudin. A, Budiarto. R, "A new image-
database encryption based on a hybrid approach of data-at-
rest and data-in-motion encryption protocol", Information
and Communication Technologies: From Theory to
Applications, 2004. Proceedings.
[16] O P Verma, Ritu Agarwal, Dhiraj Dafouti, Shobha Tyagi,
"Peformance Analysis Of Data Encryption Algorithms",
Electronics Computer Technology (ICECT), 2011 3rd
International Conference.
[17] http://h71000.www7.hp.com/doc/83final/ba554_90007/ch04
s02.html#cert3-fig
[18] http://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_ke
y_exchange
[19] http://www.ti.com/lit/an/swra077/swra077.pdf.
[20] Kaliski B, "A survey of encryption standards",Micro, IEEE.
[21] S. Hariri, X. Lizhi,C. Huoping, Z. Ming, S. Pavuluri, S. Rao.
"AUTONOMIA: An Autonomic Computing Environment",
In International Performance Computing and
Communications Conference., 2003, pp. 61-68.
[22] H. Chen, Y. B. Al-Nashif, G. Qu, S. Hariri: “Self-
Configuration of Network Security." EDOC 2007, pp. 97-110.
[23] H. Chen, S. Hariri, and F. Rasal. “An Innovative Self-
Configuration Approach for Networked Systems and
Applications”. The 4th ACS/IEEE International Conference
on Computer Systems and Applications (AICCSA-06).
[24] http://h71000.www7.hp.com/doc/83final/ba554_90007/ch04
s02.html#cert3-fig
[25] http://blog.fpmurphy.com/2010/04/openssl-des-api.html
[26] Thiruneelakandan A, Thirumurugan T, "An approach
towards improved cyber security by hardware acceleration of
OpenSSL cryptographic functions", Electronics,
Communication and Computing Technologies (ICECCT),
2011 International Conference on 12-13 Sept. 2011.
[27] J. Black, M. Cochran, T. Highland, "A Study of the MD5
Attacks: Insights and Improvements".
[28] Schwartz, Mathew J. “Dropbox Accused of Misleading
Customers on Security.” Information Week. May 16, 2011.
http://www.informationweek.com/news/storage/security/229
500683.
[29] Cisco Visual Networking Index Report,
http://blogs.cisco.com/sp/ip-traffic-to-quadruple-by-
2015/#utm_source=feedburner&utm_medium=feed&utm_ca
mpaign=Feed%3A+CiscoBlogSp360ServiceProvider+%28C
isco+Blog+%C2%BB+SP360%3A+Service+Provider%29.
[30] Study Conducted in March by AFCOM -
http://www.afcom.com/communique/communique_1.html.
[31] Research by Ponemon Institute, Benchmark study of 41 US
datacenters,http://www.emersonnetworkpower.com/en-
US/Brands/Liebert/Documents/White%20Papers/sl-
24659.pdf.
[32] http://www.techvibes.com/blog/canadians-are-cautious-and-
conservative-when-it-comes-to-cloud-computing-2011-06-
07.
[33] www.futurecloudcomputing.net.
[34] Glynis Dsouza, Hemayamini Kurra, Hamid Alipour, Youssif
Al-Nashif, Salim Hariri , NSF Center for Cloud and
Autonomic Computing, The University of Arizona,
“Resilient Cloud Services”, Submitted to IEEE Transactions
on Network and Service Management, 2013.
[35] https://cloudsecurityalliance.org/topthreats/csathreats.v1.0.pdf
[36] http://my.opera.com/securitygroup/blog/2009/09/29/512-bit-
rsa-key-breaking-developments
Retrieved: 2013-22-5
[37] http://en.wikipedia.org/wiki/SQL_injection