The common assumption is that private cloud is the most secure because security control operations are directly in the hands of the organization. But, like everything, the reality is more complex: private cloud creates security challenges in addition to benefits.
The biggest challenge stems from reduced barriers to creating and changing production virtual images. Consider a private cloud environment's evolution over time: Employees create "one-off" images to meet critical dates or assist in QA, resulting in VM sprawl -- an accumulation of semi-documented virtual images that may persist indefinitely, which is detrimental to security. Rapid image re-purposing can also lead to inappropriate use -- for example, using a development image as the baseline for a production application.
Now, these problems aren’t new; server sprawl and configuration challenges existed in the legacy world of physical data centers. The difference is that in private cloud, “gates” have gone away. In a legacy data center, the need to purchase hardware gated or controlled deployment; in public cloud, the need to interact outside the organization (or pay per image) also slows down expansion. In private cloud, the only gating factors remaining are storage and processing power -- a near-infinite ceiling.
Advice on prevention of these issues usually consists of a call for discipline on the part of the organization. But for security organizations that understand no prevention strategy is ever 100% effective, detection as well as prevention is important. Let’s look at ways organizations can control VM sprawl by detecting inappropriately configured or “rogue” images as well as inappropriate use of images.
Locating rogue images
In any virtualization deployment, images pop up like mushrooms and change quickly, often in a controlled and legitimate manner. The goal isn't just to identify that something changed; it's to identify inappropriate change by defining what's supposed to be there and comparing that to what is.
Now, it's easy to get a list of images -- every hypervisor on the market will provide that by default. The hard part is knowing what's supposed to be there. This is why stock hypervisor inventory features alone fall short: to find rogues, you need to know not just what images exist, but which images are supposed to exist and how they should be configured.
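At its core, this comparison is a set-difference problem: diff what discovery found against an approved manifest. The sketch below illustrates the idea; the manifest format, image names and configuration fields are all hypothetical, invented for illustration.

```python
# Sketch: flag rogue, missing and misconfigured images by diffing a
# discovered inventory against an approved manifest. All data below
# is hypothetical -- in practice it would come from your asset
# management tool and your hypervisor's inventory export.

# What's *supposed* to exist: image name -> expected configuration.
approved = {
    "qa-weblogic-01":  {"env": "qa",   "baseline": "weblogic-qa-v3"},
    "prod-payment-01": {"env": "prod", "baseline": "payment-prod-v7"},
}

# What discovery *actually* found.
discovered = {
    "qa-weblogic-01":   {"env": "qa",   "baseline": "weblogic-qa-v3"},
    "prod-payment-01":  {"env": "prod", "baseline": "payment-prod-v6"},
    "temp-build-image": {"env": "dev",  "baseline": "unknown"},
}

def audit(approved, discovered):
    rogue = sorted(set(discovered) - set(approved))    # no approved record
    missing = sorted(set(approved) - set(discovered))  # expected but absent
    drifted = sorted(
        name for name in set(approved) & set(discovered)
        if approved[name] != discovered[name]          # configuration drift
    )
    return rogue, missing, drifted

rogue, missing, drifted = audit(approved, discovered)
print("rogue:", rogue)
print("drifted:", drifted)
```

The point of the sketch is that the detection logic itself is trivial; the real work is building and maintaining the approved manifest, which is exactly what asset management tooling provides.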
There are a few strategies that work here, but the most effective combines discovery capability with tools that perform asset management and inventory tracking. If you already own inventory/discovery software such as IBM Tivoli or SolarWinds Orion (chances are you do if you're migrating from a traditional data center), ideally leverage those tools.
But since cloud deployments are usually driven by cost savings, there's no guarantee you'll be able to pay for those kinds of commercial tools, so it's useful to have a few free alternatives. Spiceworks is free, easy to use, and offers built-in network discovery as well as virtual image discovery via an add-on tool. Remember that network discovery only finds live, responsive hosts, so to be thorough you'll probably want to use both sets of discovery capabilities.
Also free (but requiring a bit more effort to configure and use) is the open source FusionInventory, which includes SNMP, NetBIOS and IP discovery (to find “live” images), and also provides data on turned-down virtual machines via the agent and through extensions. Configuring this can be challenging, but there’s a pre-configured virtual appliance available that’s helpful to get a live configuration up and running (though it’s not recommended for production deployment).
Finding inappropriate use
Finding rogue and poorly configured images is great, but what happens when an appropriately configured image of a particular type (say, “QA WebLogic Server”) is used for an entirely different purpose than originally intended (say, “Production Payment App”)?
Historically, this problem has been very difficult to solve. Many are watching evolution in the hypervisor space -- for example, the addition of data loss prevention to VMware's vShield App 5 -- that promises to make finding rogue data more manageable. But since that a) costs money, and b) is "future state" for most of us, the onus is on organizations to find and control the data in the meantime. One strategy for that is data loss prevention (DLP).
Much has been said about DLP before and during a cloud migration, but I'm talking about the specific case of DLP on the image -- after the move -- as an interim stopgap for finding production data on test/development images. You can do this by incorporating a DLP agent into test and development baseline images. If you have DLP in house already, use that; if not, a free alternative like OpenDLP or MyDLP can monitor inbound and outbound data streams for credit card numbers, Social Security numbers and custom user-supplied regular expressions. Both have virtual appliances that can be set up and put into production quickly.
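Pattern-based detection of this kind boils down to regular-expression matching plus a validity check to cut false positives. The sketch below shows the idea for card numbers and U.S. Social Security numbers; the patterns and sample text are simplified illustrations, not the rule sets any particular DLP product ships with.

```python
import re

# Loose pattern for 13-16 digit card numbers, allowing spaces/dashes.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")
# U.S. Social Security number pattern (NNN-NN-NNNN).
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def luhn_ok(number: str) -> bool:
    """Luhn checksum: weeds out random digit strings that merely
    look like card numbers, reducing false positives."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def scan(text: str):
    """Return (label, match) pairs for suspected sensitive data."""
    hits = [("SSN", m.group()) for m in SSN_RE.finditer(text)]
    hits += [("CARD", m.group()) for m in CARD_RE.finditer(text)
             if luhn_ok(m.group())]
    return hits
```

A real DLP agent wraps logic like this around file, memory and network-stream inspection, plus the user-supplied custom patterns mentioned above; the matching core, though, is this simple.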
By keeping one eye open for inventory changes and the other on changes to where data resides, organizations can address some of the security challenges associated with VM sprawl in a private cloud.
About the author:
Ed Moyle is a senior security strategist with Savvis as well as a founding partner of Security Curve.