It's a truism that real-world environments in today's business climate are becoming more complex and more connected -- and, as a result, more challenging to secure than they've ever been. Several factors are driving this, all of them happening near-simultaneously in many organizations.
First, the data center itself is in a period of transition. The software-defined data center paradigm allows abstraction of everything from networking to storage to computing resources within the data center. Workloads can now be seamlessly moved, dynamically provisioned in response to anything from business need to economic factors, and optimized in any number of extremely flexible ways. All of this can happen with very little lead time required to effect specific changes.
Likewise, organizations continue to embrace the cloud. This adds another layer of complexity, as third-party resources via infrastructure as a service may be brought to bear in response to dynamic conditions. Lastly, add to this already complicated mix the rapid adoption of containerization technologies, such as Docker and Rocket, both in the data center and through cloud services, and the security pro is left with an extremely complicated soup of cloud workloads and data that they must somehow secure.
As with anything, the law of unintended consequences applies to this tremendous flexibility and fluidity. In the past, many security programs generally -- and individual countermeasures specifically -- have relied on contextual information to determine where and how to best implement countermeasures. For example, an organization with PCI DSS compliance in scope might implement certain controls where cardholder data is stored/processed/transmitted but not in other locations. Another organization might base decisions about where to place intrusion detection system sensors, network taps, proxies or other security-relevant technologies on the basis of where components reside architecturally.
What happens, though, when the underlying substrate becomes more fluid? When contextual assumptions are mutable -- changing rapidly and programmatically in response to dynamic conditions? To say the result is confusing and challenging to maintain is an understatement.
Microsegmentation and cloud workloads
One strategy that can offer potential advantages in this situation is the idea of microsegmentation. The premise of microsegmentation is to allow the definition of abstract logical groups that govern the application of security policy for specific cloud workloads. In other words, an organization can assign a cloud workload to an abstract group and choose how security policy is applied to, and enforced in, that group. This means that security policy is sticky regardless of where the individual workload is moved and what's located around it. Through doing this, one can allow the function of the workload instead of architectural context to dictate connectivity (i.e., what can talk to what), the application of network-based countermeasures like intrusion detection systems or even the application of hypervisor-integrated security tools such as vulnerability scanning.
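The group-based policy model described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration -- the class names, rule structure and data are invented for this example and do not correspond to any vendor's actual API -- but it shows the core idea: policy attaches to a workload's group membership, not its network location, so it travels with the workload wherever it moves.

```python
# Illustrative sketch of group-based microsegmentation policy.
# All names and structures here are hypothetical, not a real product API.

from dataclasses import dataclass


@dataclass(frozen=True)
class Workload:
    name: str
    groups: frozenset  # abstract logical groups, e.g. {"web", "pci"}


@dataclass(frozen=True)
class Rule:
    src_group: str
    dst_group: str
    port: int


class PolicyEngine:
    def __init__(self, rules):
        self.rules = list(rules)

    def is_allowed(self, src: Workload, dst: Workload, port: int) -> bool:
        """Default-deny: traffic passes only if an explicit rule matches."""
        return any(
            r.src_group in src.groups
            and r.dst_group in dst.groups
            and r.port == port
            for r in self.rules
        )


# Policy follows group membership, so it is unaffected by where the
# workload happens to be running or what sits next to it.
web = Workload("web-01", frozenset({"web"}))
db = Workload("db-01", frozenset({"db", "pci"}))
engine = PolicyEngine([Rule("web", "db", 5432)])

print(engine.is_allowed(web, db, 5432))  # True: explicitly permitted
print(engine.is_allowed(db, web, 443))   # False: no matching rule
```

If `web-01` is migrated to another host or hypervisor, its group membership -- and therefore its effective policy -- is unchanged, which is exactly the "sticky" property described above.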
There are a few direct benefits of this. First, by tying the enforcement of security policy to the workload itself and by moving enforcement as close to it as possible, the security controls can persist and remain relevant regardless of changes made dynamically underneath it. Second, it means that should the perimeter of the infrastructure as a whole be compromised, an additional layer is in place to help potentially mitigate attacks on cloud workloads -- it also can potentially minimize lateral movement within the infrastructure by the attacker should a compromise occur. Lastly, it means that security-relevant changes can be made dynamically in light of changing events. For example, if a new risk should arise, a new threat vector emerges or other changes occur that impact a certain subset of the environment, changes in policy can be made and applied quickly as a response measure.
Preparing the way for microsegmentation
The benefits that microsegmentation of cloud workloads has from a security point of view can be compelling. And, in fact, this paradigm can serve as a valuable tool in a security professional's toolbox. Most security pros can point to numerous instances where best practices recommend the use of internal segmentation, such as the creation of security zones, on the internal network -- but those who've actually attempted to implement this know it's challenging to set up and maintain.
As a practical matter in actually implementing microsegmentation, there are a few things to consider. First, it takes some preplanning and analysis. Specifically, it's important to have a detailed understanding of the role, purpose and function of what enterprises have in place. This includes critical business applications -- specific workloads -- and how they interact with each other. Not only is this important as a reference point for creating the requisite groups to attach security policy to, but it's also important because few things guarantee negative business impact, such as downtime, as reliably as guessing which critical business applications require connectivity to other systems. This means that, to implement microsegmentation well, security teams need tools -- whether already in place or newly acquired -- to collect and analyze detailed information about traffic and usage, including important connections between nodes.
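The traffic-analysis step can be sketched as simple flow-log aggregation. The record format below is an assumption made for illustration -- real flow sources (NetFlow, VPC flow logs, hypervisor telemetry) differ in shape -- but the principle is the same: aggregate observed connections into a dependency map, and treat each entry as a candidate allow rule.

```python
# Hypothetical flow-log analysis to discover which workloads talk to which,
# as input to defining microsegmentation groups. Record format is assumed.

from collections import defaultdict

flows = [
    {"src": "web-01", "dst": "app-01", "dport": 8080},
    {"src": "web-02", "dst": "app-01", "dport": 8080},
    {"src": "app-01", "dst": "db-01",  "dport": 5432},
    {"src": "app-01", "dst": "db-01",  "dport": 5432},  # duplicates collapse
]


def connectivity_map(records):
    """Aggregate observed (src, dst, port) tuples into a dependency map."""
    deps = defaultdict(set)
    for r in records:
        deps[r["src"]].add((r["dst"], r["dport"]))
    return dict(deps)


deps = connectivity_map(flows)
# Each entry is a candidate allow rule; anything never observed stays denied.
for src, targets in sorted(deps.items()):
    print(src, "->", sorted(targets))
```

In practice this observation window needs to run long enough to capture infrequent but legitimate flows (month-end batch jobs, backup traffic) before rules are locked down.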
Additionally, it is helpful to account for technologies higher up the stack than the level of a virtualized OS instance. For example, consider a virtual image running an instance of the Docker engine. There are two levels of abstraction at work here: the virtual OS and the Docker container. The segmentation rules that security teams put in place need to account for the applications running in the containers right now (e.g., connectivity requirements), but knowing that other containers might run there in the future can save some pain down the road. For example, an organization might choose to couple its segmentation methodology with the process for deciding which containers run where.
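Coupling segmentation with container placement might look something like the following sketch. The data structures are hypothetical: a host's effective connectivity is the union of what its segmentation groups permit, and a container fits on a host only if that host already permits the flows the container needs.

```python
# Hypothetical sketch: check a container placement against segmentation
# policy. A host's effective reach is the union of its groups' allowances.

container_needs = {
    "payments-api": {("pci-db", 5432)},
    "blog":         {("cms-db", 3306)},
}

# Which segmentation groups each host (VM) belongs to, and what each
# group is allowed to reach -- all illustrative values.
host_groups = {"vm-7": {"pci"}}
group_allows = {"pci": {("pci-db", 5432)}}


def placement_ok(host, container):
    """A container fits only if the host's groups already permit its flows."""
    allowed = set()
    for g in host_groups.get(host, set()):
        allowed |= group_allows.get(g, set())
    return container_needs[container] <= allowed


print(placement_ok("vm-7", "payments-api"))  # True: pci group covers it
print(placement_ok("vm-7", "blog"))          # False: no path to cms-db
```

Gating scheduling decisions this way keeps a future container from silently requiring connectivity the host's segmentation groups were never meant to provide.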
Lastly, many recommend a zero-trust approach in implementation. This means starting from the most restrictive constraints on connectivity and permitting only the critical connections -- identified through thorough and iterative analysis -- between hosts that require them for approved business-supporting activity.
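The zero-trust stance reduces to a simple rule-derivation step, sketched below with invented data: the allow-list is the intersection of what was observed in traffic analysis and what a human review actually approved, and everything else is denied by default.

```python
# Sketch of zero-trust rule derivation: start from deny-all and admit only
# flows that were both observed and explicitly approved. Data is illustrative.

observed = {
    ("web", "app", 8080),
    ("app", "db", 5432),
    ("web", "db", 5432),   # seen on the wire, but not a sanctioned path
}
approved = {
    ("web", "app", 8080),
    ("app", "db", 5432),
}

# Allow-list = observed AND approved; everything else stays denied.
allow_list = observed & approved


def evaluate(src_group, dst_group, port):
    return (src_group, dst_group, port) in allow_list

print(evaluate("app", "db", 5432))  # True: observed and approved
print(evaluate("web", "db", 5432))  # False: observed but never approved
```

Note that merely having been observed is not enough -- the web tier's direct database connection above is denied precisely because iterative review never sanctioned it.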