

Zombie cloud infrastructures haunt enterprise security teams

Instances that are created but then forgotten can become zombie cloud infrastructure that threatens enterprise security. Expert Frank Siemons discusses how to handle these systems.

One of the most significant benefits of cloud instances over traditional network configurations is that, with a few clicks, an instance can be set up in seconds. This ability has dramatically reduced deployment times for test, model and production systems, and it enables greater flexibility from both a technical and a billing perspective.

Although it is quick and easy to deploy new systems, it is not so simple to decommission existing ones. Within a large organization -- which is usually more risk-averse -- there needs to be a guarantee that a system has no purpose now or in the future before the virtual plug can be pulled. Getting that guarantee on paper takes considerable effort, and no one wants to be responsible for turning off a well-running system that, for instance, executes a critical monthly report or task.

This has created a relatively new issue that carries serious security concerns with it: the zombie cloud infrastructure. The term refers to systems that remain in place only because leaving them running seems the safest option.

Why is a zombie cloud a security concern?

Maintenance on these systems might have stopped because they are no longer considered relevant or are no longer in production. They are slowly forgotten about, and if they were only used for a very short period, their existence might not even be fully documented.

This can be a security professional's worst nightmare. These systems could be servers, but they could also be virtual firewalls, switches or entire model environments containing all these asset types.

If these zombie cloud systems are no longer maintained, then they are no longer patched against the latest security threats. If, for instance, the next Shellshock-type vulnerability is discovered and exploited, who makes sure the patches released to protect servers against it are deployed? Is there even any visibility into the fact that these systems are unpatched if no one maintains them?

A regular internal vulnerability scan can help here. Scanning an entire IP range might reveal undocumented and unpatched systems, which can then be decommissioned, or at least adequately patched until their purpose is clarified internally.
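As a rough illustration of such a sweep, the sketch below performs a simple TCP connect scan across an internal range to surface hosts that answer on common service ports. The CIDR range and port list are assumptions for the example; a real inventory scan would use a dedicated tool such as a vulnerability scanner, with authorization to scan the network.

```python
# Minimal sketch: TCP connect scan over an internal IP range to surface
# undocumented hosts and services. The range and ports are illustrative.
import ipaddress
import socket

COMMON_PORTS = [22, 80, 443, 3389]  # SSH, HTTP, HTTPS, RDP


def scan_host(ip, ports, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `ip`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds
            if s.connect_ex((ip, port)) == 0:
                open_ports.append(port)
    return open_ports


def scan_range(cidr, ports=COMMON_PORTS):
    """Map each responding host in a CIDR range to its open ports."""
    results = {}
    for ip in ipaddress.ip_network(cidr).hosts():
        found = scan_host(str(ip), ports)
        if found:
            results[str(ip)] = found
    return results
```

Any host that shows up here but not in the asset register is a candidate zombie system worth investigating.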

Not all of these systems feed security events into a monitoring platform. This is especially the case for test and model systems that were only meant to operate for a short period of time. Unfortunately, an antivirus agent or a host-based intrusion detection system is usually not among the first applications installed in a test environment.

This lack of visibility from a security perspective means that, if compromised, such a system could serve an attacker as the perfect jump host or backdoor for pivoting further into the network. The potential lack of patching mentioned earlier further increases this risk.

Network-based security devices should pick up some of the noise produced by an attack on these systems. A perimeter next-generation firewall or network-based intrusion detection system could, for instance, pick up suspicious traffic traversing the network to and from such a target, regardless of the lack of host-based security monitoring.


Another issue these systems introduce is the potential existence of unused confidential data. What data is present on these systems, who has access to it and who still uses it? These questions are not only critical in the decision to decommission a redundant system; left unanswered, they also degrade security while the system is still operational. There could, for instance, be personally identifiable information on these systems. In a secure environment, all data and access to that data should be justified.
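A first pass at answering the "what data is present" question could be a simple content sweep of a suspect system. The sketch below greps text files for patterns resembling PII; the patterns, the `.txt` file filter and the function name are assumptions for illustration, and a real audit would use a proper data loss prevention tool.

```python
# Sketch: flag files that appear to contain PII on a suspected zombie system.
# The regex patterns are illustrative, not exhaustive.
import re
from pathlib import Path

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def find_pii(root):
    """Return (path, pattern_label) pairs for text files matching a PII pattern."""
    hits = []
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        for label, pattern in PII_PATTERNS.items():
            if pattern.search(text):
                hits.append((str(path), label))
    return hits
```

Even a crude sweep like this can show whether a forgotten system is holding data that must be accounted for before it is decommissioned.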

Another example of zombie cloud infrastructure is a file server migration in which user data was moved from an old server to a new one. If the old server is never decommissioned, access permissions on the old and new copies of the data can drift out of sync, creating a security issue if the old data is somehow still accessible.
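Such permission drift can be checked for mechanically. The sketch below compares Unix mode bits for matching files in an old and a new share; the function name and the bare `os.stat` comparison are assumptions for the example, and real enterprise shares would likely need ACL-aware tooling.

```python
# Sketch: report files whose permission bits diverged between an old and a
# new file share after a migration. Compares only Unix mode bits.
import stat
from pathlib import Path


def permission_drift(old_root, new_root):
    """Yield (relative_path, old_mode, new_mode) for files whose modes differ."""
    old_root, new_root = Path(old_root), Path(new_root)
    for old_file in old_root.rglob("*"):
        rel = old_file.relative_to(old_root)
        new_file = new_root / rel
        if new_file.exists():
            old_mode = stat.S_IMODE(old_file.stat().st_mode)
            new_mode = stat.S_IMODE(new_file.stat().st_mode)
            if old_mode != new_mode:
                yield rel, oct(old_mode), oct(new_mode)
```

A non-empty report here would suggest the old server is granting access that the migrated copy no longer allows, or vice versa.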

What to do about the zombie cloud

There are a few options to limit the impact of an existing zombie cloud infrastructure within an organization. One approach is to scan the internal -- and perimeter, if needed -- network for unidentified systems and services. Another prevention method could be a comprehensive set of processes and guidelines covering subjects such as deployment, change management, documentation and decommissioning. Finally, a combination of perimeter and internal traffic analysis tools, such as intrusion detection tools and next-generation firewalls, should be able to identify suspicious traffic regardless of its source and destination.

The remedies for this cloud issue are not new. They were nearly the same 20 years ago when all infrastructures were purely physical. The reduction in deployment and operational costs and the elimination of aging hardware and service contracts, however, have only increased the need to adhere to these already existing security best practices.

Next Steps

Find out how to choose the best public cloud instances for your organization

Learn how to tell when you need bigger cloud instances

Read about other challenges with cloud systems management
