CSA Guide to Cloud Computing

In this excerpt of CSA Guide to Cloud Computing, authors Raj Samani, Brian Honan and Jim Reavis review cloud security threats based on research by the CSA's Top Threats Working Group.


The following is an excerpt from the book CSA Guide to Cloud Computing by authors Raj Samani, Brian Honan and Jim Reavis, published by Syngress. This section from chapter three discusses threats known as "The Notorious Nine" that hinder cloud security.

The Cloud Threat Landscape

Utilizing the cloud provides organizations with many business benefits, but with these benefits come a number of threats. Some of these threats are the traditional threats that we are accustomed to while others are unique to the cloud. By better understanding the various threats that can face our data and services in the cloud we are better prepared to determine how best to secure them.

Before examining the various threats, it is important that we first understand what a threat is. There are many different interpretations and definitions for threats in the context of computer security. The Oxford English Dictionary defines a threat as

(noun) (1) a stated intention to inflict injury, damage, or other hostile action on someone. (2) a person or thing likely to cause damage or danger. (3) the possibility of trouble or danger

In security fields we tend to focus on the second definition "a person or thing likely to cause damage or danger." However, we need to focus further into what exactly a threat is, particularly in relation to information security.

According to the International Organization for Standardization's ISO 27001 information security standard, a threat is defined as

a potential cause of an unwanted incident, which may result in harm to a system or organization.

Under the Payment Card Industry Data Security Standard a threat is described as

Condition or activity that may cause information or information processing resources to be intentionally or accidentally lost, modified, exposed, made inaccessible, or otherwise affected to the detriment of the organization.

The National Institute of Standards and Technology defines a threat in SP 800-30 as

the potential for a threat-source to exercise (accidentally trigger or intentionally exploit) a specific vulnerability

While the above definitions are all relevant to information security, the definition supplied by the European Network and Information Security Agency (ENISA) is probably the most apt, particularly when taking cloud computing into account.

According to the ENISA, a threat is

Any circumstance or event with the potential to adversely impact an asset through unauthorized access, destruction, disclosure, modification of data, and/or denial of service.

Having understood what a threat is, it is important to appreciate how threats against computer systems have evolved over the years. This is not just so we can better understand today's threats but also so we can appreciate that as computing technology evolves and our business and personal use of it also evolves, so too will the threats.

Evolution of Cyber Threats

Since we first started using computers they have been under threat. Those threats come from various sources whether they are from those with malicious intent, from well-intentioned people making mistakes, man-made failures such as power outages, or indeed natural disasters. As our use of computers and the Internet has grown over time so too has the number and the sophistication of the threats facing those systems.

In the early years of computing, threats against computer systems came mainly from internal sources, such as disgruntled or unhappy employees, or from well-meaning users who made mistakes. The other threats faced by these systems were from natural sources or man-made causes such as hardware failures or software bugs. This low level of threat was due to many such computer systems being isolated from other systems outside their own organization's offices and buildings. As a result, the threats against these systems were mostly limited to those with physical access to those systems or to disasters in the locale.

Over time, access to these systems became more and more frequent with companies employing modems and wide area networks to allow remote offices and users to connect to them. While enabling remote users to gain from the benefits of these systems, it also opened up these systems to threats from external parties.

At this stage in the evolution of computing, the external threats posed to organizations' systems came mainly from individuals who broke into and explored these systems to determine how computers, networks, and systems worked. In the main, there was no malicious intent in this type of activity; the primary motive was curiosity.

In the 1980s, we witnessed the introduction of personal computers and their subsequent growth not just in home use but also within corporate environments. Over time, and as a result of these developments, companies and organizations saw their staff becoming more and more productive as they moved from a centralized computing model to a distributed one. The growth in use of personal computers saw data being moved from being stored and managed on a central location onto individual computers located throughout organizations.

In parallel to this growth in the use of Personal Computers, there was also the growth in the use of the Internet. With the growth of the Internet, many organizations took advantage of its openness and global spread to enable them to promote their services, products, and their brands to existing and potential customers. Other Internet-based technologies also enabled workers to share information with others and to be more productive and effective.

All these new technologies brought many advantages to organizations and indeed to society and the economy in general. However, legitimate businesses and organizations were not the only ones taking advantage of these new technologies. Those with malicious intent also saw the opportunities in this brave new world.


In the early stages, the number of attackers looking for financial gain from stealing information from systems also started to increase. While the majority of online attacks still came from those with curiosity as their main motive, many others saw the Internet as a way to promote their political cause or other activism by attacking and disrupting systems to raise awareness of their cause, or by defacing an organization's Web site and posting their messages online.

The threat posed by those looking to gain financially also increased, as they sought to extort money from organizations by defacing their Web sites and demanding payment to stop the defacement from happening again, or by stealing information from their systems.

With the dawn of the twenty-first century, we saw an explosion in organizations rushing to store and transmit more and more data on their computer systems, and a surge in the use of the Internet by organizations to promote and sell their products and services. As companies rushed to benefit from computers and the Internet, so too did those with malicious intent. As the value of information grew, and the ability to steal that information through insecure systems grew with it, we witnessed a change in online criminals. No longer a niche arena for individuals or small numbers of like-minded people, cybercrime now attracted traditional organized criminal gangs, who saw many new opportunities to make vast sums of money by exploiting weak computer security with relatively low risk of being prosecuted.

This evolution in online threats was also mirrored by the growth in sophistication of computer viruses over the same period. The early computer viruses were not very sophisticated and were primarily designed to disrupt the operation of the systems they infected, often in amusing ways, such as the Cascade and Ping-Pong viruses. As these viruses were easily detected due to their disruptive nature, they could be eliminated with the appropriate security tools or by rebuilding the system. Today, however, most viruses are specifically designed to go undetected, as their raison d'être is no longer to cause disruption. Instead, criminals create these viruses to remain undetected on infected systems so they can be used to steal valuable data such as sensitive financial data, logon credentials to financial systems, or valuable information such as an organization's intellectual property.

The modern computer virus is also designed not just to steal information but also to enable online criminals to use infected computers in other criminal enterprises, such as sending spam e-mails, infecting other computers, and extorting money from companies by using the infected computers under their control to take part in a distributed denial-of-service (DDoS) attack.

Computer viruses are also being developed as advanced weapons to silently attack targets. The Stuxnet virus is a prime example of how a computer virus can be used to silently disrupt the operations of a critical target. We will no doubt see further advances in the complexity and capabilities of computer viruses in the future.

As our use of computer systems has evolved so too have the threats facing those systems; moving to the cloud is just one more evolution in our use of computers, networks, and applications and while the traditional threats facing those systems still remain, there will be other threats that will evolve specifically against cloud computing.

Knowing and understanding what these threats are will make it easier to develop strategies, solutions, and systems to counter and manage those threats.


Data Breaches

Cited as the number one security threat for cloud computing, data breaches refer to the loss of confidentiality for data stored within a particular cloud instance. It is of course worth noting that such a threat is likely to exist even within an on-premise solution, or traditional outsourced solution.

The concern over the loss of confidentiality is entirely understandable, as the potential financial and reputational cost can be significant. This will be entirely dependent on the data that have been stolen; organizations will have many types of data ranging from intellectual property and sensitive business information to personal data (e.g., customer data). For personal data, according to the "2013 Cost of Data Breach Study" conducted by the Ponemon Institute, a data breach (referred to as the theft of protected personal data) can cost up to $200 per record. This cost depends heavily on the country in which the surveyed company resides, as depicted in Figure 3.1.

In terms of deriving the cost per record, costs were divided into two categories, direct and indirect. Direct costs are those that refer to "the expense outlay to accomplish a given activity such as engaging forensic experts, hiring a law firm or offering victim's identity protection services. Indirect costs include the time, effort and other organizational resources spent during the data breach resolution." Dependent on the country in which the surveyed company resided, the costs varied in terms of direct versus indirect. For example, companies surveyed in the United States experienced 32% direct costs compared with those in Brazil where direct costs rose to 59%. According to insurance company Beazley in their small business spotlight, the greatest direct cost associated with responding to a data breach is the notification required. This of course is more relevant to those businesses that have a requirement to notify affected customers. In the United States, for example, and as of the time of writing, and according to Bloomberg Law there are only four states without a data breach notification law; these are Alabama, Kentucky, New Mexico, and South Dakota. However, the data notification requirements across the various states do differ, with varying requirements such as notification triggers and method of notification.

Figure 3.1: Estimated cost of breach per record (in USD).
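As a rough illustration of how these figures combine, a breach's headline exposure can be sketched as records multiplied by cost per record, split into direct and indirect components. This is a minimal sketch, not a predictive model: the $200 upper bound and the 32% US direct-cost share come from the studies cited above, while the function name and defaults are illustrative assumptions.

```python
# Rough breach-cost sketch using figures quoted in the text:
# up to $200 per record (Ponemon, 2013), with ~32% of costs being
# direct for US companies. All values are illustrative, not predictive.

def breach_cost(records: int,
                cost_per_record: float = 200.0,
                direct_share: float = 0.32) -> dict:
    """Split an estimated breach cost into direct and indirect components."""
    total = records * cost_per_record
    return {
        "total": total,
        "direct": total * direct_share,          # forensics, law firms, notification
        "indirect": total * (1 - direct_share),  # staff time, lost business, churn
    }

# e.g., 10,000 stolen customer records at the $200/record upper bound
print(breach_cost(10_000))
```

Even this crude arithmetic makes the scale of the problem obvious: a modest breach of 10,000 records already implies a seven-figure exposure at the upper bound.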

Now of course, the United States is not the only country where data breach notification laws exist; under the European Union's Regulation on the notification of personal data breaches, providers of publicly available electronic communications services are obligated to notify customers about data breaches. This notification must be made to the national competent authority within 24 hours. Moreover, impending legislation, in particular in the European Union, is likely to increase the notification requirements for organizations that experience a data breach.

Notification is one cost associated with data breaches; however, as recent public data breaches have demonstrated, those affected companies have many other costs to contend with, and these may be either direct or indirect. Additional costs can include direct technical costs to identify the cause of the breach, and any remediation work to close vulnerabilities and prevent the issue from reoccurring. In addition, there are likely to be costs associated with the breach itself, such as the potential loss of business. Following the 2006 data breach experienced at the TJX Corporation in which 45 million credit and debit cards were stolen, it was reported that the retailer had faced costs of over $256 million (these figures do vary greatly dependent on source; therefore, the more conservative figure is quoted here), despite initial estimates putting the costs at a "mere" $25 million. While this level of data breach is certainly at the higher level of examples, it does provide an illustration of the impact an organization faces when experiencing a data breach, and subsequently validates the reason why it is rated as the number one concern when migrating to cloud computing. A large proportion of the costs from the TJX breach was related to the offer of services to its customers; this included credit monitoring services as well as identity theft protection. A breakdown of the estimated costs and associated activities was presented in an article published by Wired in 2007; while the actual figures in Table 3.1 may be disputed, it does provide an insight into the costs related to a data breach.


What these figures, or rather these activities, clearly demonstrate is that the costs associated with a data breach can be significant, and any potential breach is quite rightly seen as a major concern. In addition, it is worth noting that some of these figures seem low and therefore it is assumed they are per record (e.g., cost per call is $25, but is likely per customer). From a cloud perspective, it is worth noting that as the risk is not outsourced, the remediation costs will be borne by the customer and not the provider. As discussed in Chapter 7, the data controller will almost always be the end customer, and therefore they will be responsible for ensuring that not only is the appropriate due diligence undertaken but their own customers (data subjects) will look to them to remedy the situation. It may be possible to point the finger at a provider, but the truth is that the data subjects (whose records have been stolen) are not direct customers of the cloud provider, and their decision to no longer work with the company they trusted to look after their data will affect the bottom line of the data controller. This is referred to as the abnormal churn rate, which can be as high as 4.4% depending on geography and sector.

Table 3.1

A small caveat to the above statement: the provider could also experience a loss of trust if the breach is significant and public enough, negatively impacting the confidence of other customers, both potential and existing.

Other types of data can also have a significant financial impact. Research conducted by the Centre for Strategic and International Studies identifies the following categories in its report entitled "Economic Impact of Cybercrime":

  • Intellectual property: "The cost to companies varies among sectors and by the ability to monetize stolen data (whether it is IP or business confidential information). Although all companies face the risk of loss of intellectual property and confidential business information, some sectors -- finance, chemicals, aerospace, energy, defense, and IT -- are more likely to be targeted and face attacks that persist until they succeed." From a cloud perspective, while personal data will demand due diligence, the protection applied to hosted data classed as intellectual property should be commensurate with its value. This should include not only the cost of the research, but also the opportunity costs such research represents to the business.
  • Financial crime: "Financial crime usually involves fraud, but this can take many forms to exploit consumers, banks, and government agencies. The most damaging financial crimes seek to penetrate bank networks, with cybercriminals gaining access to accounts and siphoning money." The migration to cloud services, particularly for financial services, will attract greater focus from nefarious actors looking to commit fraud by targeting systems hosted by external providers. This renewed focus was reported by CNBC when "cybercriminals acting in late 2013 installed a malicious computer program on the servers of a large hedge fund, crippling its high-speed trading strategy and sending information about its trades to unknown offsite computers." Admittedly, these types of attacks are not solely targeted at cloud computing, but they demonstrate that the threat landscape for financial fraud involves malicious actors that are technically adept and well resourced.
  • Confidential business information: "The theft of confidential business information is the third largest cost from cybercrime and cyberespionage. Business confidential information can be turned into immediate gain. The loss of investment information, exploration data, and sensitive commercial negotiation data can be used immediately. The damage to individual companies runs into the millions of dollars."

The loss of confidentiality can have a significant impact on an organization regardless of whether the data are hosted externally or on an internally provisioned service. Using cloud computing can bring enormous efficiency gains, but as the example of Code Spaces (covered in more detail under Data Loss) demonstrates, the need for security remains; indeed, one can argue that, with the volume and complexity of threats increasing, the need for security has never been greater. Ultimately, the loss of confidentiality will impact cloud customers significantly, and will also be to the detriment of the provider.

Data Loss

Unlike data breaches, loss of data refers to the unavailability of data stored within the cloud for the end customer. We touched on the subject briefly in the first chapter using MegaUpload as the example; however, the legal status of the provider is only one example that may potentially impact the service.

Provider Viability

What do you do when your cloud service provider (CSP) goes bankrupt? This was a question that customers of Nirvanix faced when they were notified they had 2 weeks to migrate their data. In a notice posted on the company's Web site on September 30, 2013, customers were advised they had until the 15th of October to ensure their data had been removed.

Two weeks. It is hardly a sufficient time frame to analyze alternate providers, conduct due diligence, and then implement a migration plan, despite the company providing a list of recommendations. Indeed, reports suggest that the provider had many customers with over a petabyte of data, and while official notice was provided, it came a full 10 days after reports began to appear in the mainstream press.

Recognition of the impact of a provider going bankrupt has led to legislation that gives the end customer a legal right to claim back data from a bankrupt provider. In July 2013, Luxembourg introduced Article 567 p2 of the Code of Commerce. This allowed the end customer the opportunity to recover "intangible and nonfungible movable assets" under the following conditions:

  • "The bankrupt company must not be the legal owner of the data but only hold it;
  • The claimant must have entrusted the data to the bankrupt company or be the legal owner of the data;
  • The data must be separable from the other intangible and nonfungible movable assets of the company at the time of the opening of bankruptcy proceedings."

The associated costs of the recovery of the data will be the responsibility of the claimant; therefore, while the law provides the means to recover data, the cost of recovery will need to be factored in. Although the law does of course include cloud computing providers, its scope is considerably wider and includes any third parties entrusted with customer data. Although a significant legal document, its scope is limited to Luxembourg; however, it serves as an indicator that legal frameworks are focusing attention on the viability of providers.

Insufficient Disaster Recovery (DR)/Business Continuity Planning (BCP) Practices

The supposed benefit of migrating to the cloud is that defining the level of availability is as simple as a line entry in the contract. It sounds simple, does it not? State availability as 99.999%, then sit back and use the service, safe in the knowledge that the service going down is all but impossible (because there is the safety of a sentence in the contract). Sadly, the reality is very far from this perfect world.
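To put that contractual number in context, availability percentages translate into surprisingly small downtime allowances. A minimal sketch of the arithmetic, assuming a 365.25-day year:

```python
# Convert an availability percentage into the downtime it permits per year.

MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

def allowed_downtime_minutes(availability_pct: float) -> float:
    """Minutes of downtime per year permitted by a given availability level."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.9, 99.99, 99.999):
    print(f"{pct}% -> {allowed_downtime_minutes(pct):.1f} minutes/year")
```

At "five nines" the contract permits barely five minutes of downtime per year, which is precisely why the clause on its own offers little real protection: the question is what happens when that figure is inevitably missed.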

What happens when the service level agreement regarding the availability of service is not met? Invariably, the response as defined within the contract results in credit being issued to the end customer. Depending on the provider, this is likely to be a tiered model, with greater compensation/credit being provided depending on the amount of downtime experienced. While receiving credit may be an appropriate level of compensation for many provisioned services, for many customers getting 10% credit for an hour's downtime may not compensate for the loss of service. This loss of service itself is most likely to be the result of a power outage, according to recent research. Of the 27 publicly reported outages in 2012, the main cause was power loss, as depicted in Figure 3.2.
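The tiered credit model described above can be sketched as a simple lookup. The thresholds and percentages below are hypothetical, not taken from any particular provider's contract:

```python
# Hypothetical tiered SLA credit schedule: if measured monthly uptime falls
# below a threshold, a percentage of the monthly fee is credited back.
# Both the thresholds and the credit percentages are illustrative only.

CREDIT_TIERS = [   # (uptime below this %, credit as % of monthly fee)
    (99.0, 25.0),  # worst tier is checked first
    (99.9, 10.0),
]

def sla_credit_pct(monthly_uptime_pct: float) -> float:
    """Return the credit (as a % of the monthly fee) owed for measured uptime."""
    for threshold, credit in CREDIT_TIERS:
        if monthly_uptime_pct < threshold:
            return credit
    return 0.0  # SLA met: no credit due

print(sla_credit_pct(99.5))  # falls into the second tier
```

This makes the text's point concrete: the credit compensates a fraction of the fee, not the business cost of the outage itself, which for many customers will be far larger.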

What was particularly interesting within the research was that the average time to recover services from an outage was 7.5 hours, and the examples used within the research included some of the biggest names in cloud computing.


While the examples of malicious actors involve a conscious decision to affect the availability of services, not all incidents are the direct result of someone with malicious intent looking to impact the availability of a paid service. The cause could be something as simple as human error; for example, an operator inadvertently deleting something or powering down an important asset. While the action may be an accident, the result is likely to be the same, namely, the unavailability of data to the end customer.

Such an example was reported by ZDNet in 2011, whereby a software bug resulted in the deletion of customer data. The status page from Amazon Web Services (AWS) at the time reported the following:

Figure 3.2: Research into reasons for cloud outage.

Independent from the power issue in the affected availability zone, we've discovered an error in the EBS software that cleans up unused [EBS] snapshots…During a recent run of this EBS software in the EU-West Region, one or more blocks in a number of EBS snapshots were incorrectly deleted.

The power issue in the status update refers to lightning that impacted the European operations of AWS. Beyond the power issue itself, the software bug left a number of customers without access to their data for a period of time. While the issue was not malicious, the net result would appear to be exactly the same.

While these examples of potential threats to a cloud service can be mitigated by employing a secondary service, or with the requisite assurance that the provider employs sufficient business continuity practices, such costs should be factored in. Therefore, the cost presented by the provider is unlikely to be the total cost of ownership for the provision of an outsourced solution. Equally, the aforementioned examples are only a small snapshot of some of the reasons for data loss; one glaring omission is the actions of malicious actors, or "hackers" if we adopt the media definition. The recent case of Code Spaces provides a stark warning to organizations looking to leverage cloud computing without implementing the appropriate level of security. In June 2014, it was reported that the company was "forced to close its doors after its AWS EC2 console was hacked." The company faced a DDoS attack that month, and the intruder also gained access to its Amazon Web Services (AWS) Elastic Compute Cloud (EC2) console and "left messages instructing the company's management to contact them via email." Although the company was able to change its passwords, the intruder leveraged backup accounts created during the intrusion. The "hacker removed all Elastic Block Storage (EBS) snapshots, Simple Storage Service buckets, AMIs, and some EBS and machine instances. Most of the company's data, backups, machine configurations and off-site backups were either partially or completely deleted, leaving Code Spaces unable to operate." Of course, this particular example could have applied to internally provisioned systems just as easily as to those hosted with a CSP. However, as Nathan McBride, Chief Cloud Architect for AMAG Pharmaceuticals, puts it, "if you're going to put your eggs in the AWS basket, you have to have the mechanisms in place to really solidify that environment." To be fair, this statement could be made about any cloud provider.

About the authors:
Raj Samani is an active member of the information security industry, through involvement with numerous initiatives to improve the awareness and application of security in business and society. He is currently working as VP, Chief Technical Officer for McAfee EMEA, having previously worked as the Chief Information Security Officer for a large public sector organisation in the UK, and was recently inducted into the Infosecurity Europe Hall of Fame (2012). He previously worked across numerous public sector organisations, in many cyber security and research orientated working groups across Europe. Examples include the midata Interoperability Board, as well as representing DIGITALEUROPE on the Smart Grids Reference Group established by the European Commission in support of the Smart Grid Mandate. In addition, Raj is currently the Cloud Security Alliance's Strategic Advisor for EMEA, having previously served as the Vice President for Communications in the ISSA UK Chapter, where he presided over the award of Chapter Communications Programme of the Year in 2008 and 2009, having previously established the UK mentoring programme. He is also on the advisory council for the Infosecurity Europe show and Infosecurity Magazine, an expert contributor to both searchsecurity.co.uk and the Infosec portal, and a regular columnist on Computer Weekly. He has had numerous security papers published, and has appeared on television (ITV and More4) commenting on computer security issues. He also provided assistance with the 2006 RSA Wireless Security Survey and was part of the consultation committee for the RIPA Bill (Part 3).

Brian Honan is recognized as an industry expert on information security, in particular the ISO27001 information security standard, and has addressed a number of major conferences relating to the management and securing of information technology. Brian was a founding member of the Irish Corporate Windows NT User Group and also established Ireland's first ever national Computer Security Incident Response Team. He is a member of the Information Systems Security Association, Irish Information Security Forum, Information Systems Audit and Control Association, and a member of the Irish Computer Society and the Business Continuity Institute. Brian's previous publications include The Cloud Security Rules, ISO27001 in a Windows Environment, and Implementing ISO27001 in a Windows Environment.

Jim Reavis is the Executive Director of the CSA, and was recently named as one of the Top 10 cloud computing leaders by SearchCloudComputing.com. Jim is the President of Reavis Consulting Group, LLC, where he advises security companies, large enterprises and other organizations on the implications of new trends and how to take advantage of them. Jim has previously been an international board member of the ISSA and formerly served as the association's Executive Director. Jim was a co-founder of the Alliance for Enterprise Security Risk Management, a partnership between the ISSA, ISACA and ASIS, formed to address the enterprise risk issues associated with the convergence of logical and traditional security. Jim currently serves in an advisory capacity for many of the industry's most successful companies.
