Scratch the surface in any organization and you’ll find the legacy environment is one of the most challenging issues
facing IT in that organization. No matter how well planned the IT strategy, how efficient the operations, or how disciplined the IT processes, there will always be technology that can’t be replaced and that doesn’t meet current standards. Because of the way many legacy applications were built, their criticality, and the expense to modify them, many organizations become “locked in” to painful maintenance and support scenarios.
Given this, it’s no wonder organizations are optimistic about using cloud computing to modernize, consolidate and virtualize legacy applications. But security here is critical, perhaps more so than in any other aspect of cloud migration, because assumptions made about security when the application was originally developed and implemented may no longer hold under the new paradigm. Legacy application migration to the cloud can be dangerous without proper planning and foresight.
Legacy application migration and the cloud
Before we can understand why security is so important in this effort, it’s important first to understand what constitutes a “legacy” application and how that application might intersect with cloud. Oftentimes, IT professionals associate legacy solely with the mainframe (technologies like CICS, VSAM and COBOL), but in reality there’s much more to it. At its most generic, legacy refers to any technology that is difficult to replace and that would be implemented with different technologies if deployed today. So anytime you’re supporting technology (be it hardware, applications, middleware, programming languages, etc.) that predates current technology standards, that technology is legacy.
This definition implies the term is relative: What is legacy to one organization could be standard operating procedure to another. It also means legacy can encompass just about any technology: a client/server application written in PowerBuilder; a mainframe application written in COBOL; Java applications using CORBA; even Web applications using technologies like CGI or NSAPI/ISAPI.
Because legacy can be any technology, virtualization may directly “touch” any number of legacy application components. In other words, legacy components could intersect cloud in a few different ways. This could include modernizing legacy mainframe applications and repositioning them as cloud-based services, but it could also be a factor in migrating more easily accessible server platforms: legacy Web servers, n-tier application servers, middleware or client/server applications. In fact, you may already be moving much of this technology to the cloud as part of direct physical-to-virtual migration and data center consolidation activities. This is why security is so important: migration of these components can happen whether or not you notice and plan for it specifically.
Security ramifications of legacy application migration
From a security perspective, the importance of assumptions made during the application development and implementation process cannot be overstated. These are so critical because the context in which the application might be hosted and used post migration could vary from how the original application was deployed. Without analysis of the application -- and security specifically -- as part of that process, there’s no way to tell which changes will impact security in a negative way.
For example, a legacy Web application may have been originally designed and architected on the assumption that it would be hosted in a dedicated, on-premises infrastructure. Developers may have relied on this assumption in developing the security model. They could have, for example, architected the application to pass data between tiers in plaintext; they may “trust” (i.e., fail to authenticate) interactions between components, because after all, the traffic is all internal. But when you relocate that application to a multi-tenanted environment in a service provider’s off-premises data center, you violate the security model and introduce risk, because now it’s not all internal.
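To make the implicit-trust problem concrete, here is a minimal sketch (all names and the HMAC scheme are hypothetical, chosen for illustration) contrasting a back-end tier that assumes only trusted internal callers can reach it with one that authenticates each inter-tier request after migration to a shared environment:

```python
# Hypothetical sketch: a back-end tier that "trusts" every caller on the
# assumption that the network is internal, versus a hardened version that
# authenticates each request. Names and secret handling are illustrative.
import hashlib
import hmac

SHARED_SECRET = b"example-secret"  # placeholder; use a real secret store


def legacy_handler(request: dict) -> str:
    # Legacy assumption: only trusted internal tiers can reach this
    # service, so no caller authentication is performed at all.
    return f"balance for {request['account']}"


def sign(payload: bytes) -> str:
    # Compute an HMAC-SHA256 signature over the request payload.
    return hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()


def hardened_handler(request: dict) -> str:
    # Post-migration: the network is multi-tenant, so verify a signature
    # on every request before processing it.
    expected = sign(request["account"].encode())
    if not hmac.compare_digest(expected, request.get("signature", "")):
        raise PermissionError("unauthenticated inter-tier request")
    return f"balance for {request['account']}"
```

The legacy handler happily serves any caller that can reach it over the network; the hardened handler rejects requests that lack a valid signature, which is the kind of control the original security model silently assumed the network boundary would provide.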
There are other ways security can be negatively affected. Keep in mind that because legacy applications leverage technologies that are no longer the standard for the organization, deep expertise in the particular technologies in use can be hard to come by. This means the personnel designing the network architecture, building out the security controls, and establishing the operating procedures around those controls may not be fully steeped in the nuances of the technology in use. Security-critical issues specific to the way the applications (and supporting technologies) work might be recognized right away by someone with deep expertise in that area, but can go unnoticed when that expertise is absent. This has the potential to create security “blind spots” in planning, or a failure to implement controls key to the security of the application.
Because of these factors, it’s important that organizations specifically plan ahead for how they will approach security for legacy application migration to the cloud. Ideally, to avoid situations of this type, we would evaluate the security model, implementation and architecture of each application prior to a migration. We’d validate the migration approach via a standardized, formalized method to make sure we’re not missing key controls and that the new context is appropriate given how the application works. In this ideal scenario, a formalized process such as Microsoft Threat Modeling could be an appropriate tool to evaluate the application, its risks and what controls are required to make sure the security model is appropriate.
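As one illustration of how such an evaluation can be organized, Microsoft's threat modeling approach pairs each data flow in the application with the STRIDE threat categories. The sketch below (not the official Microsoft tooling; the flow names are hypothetical) shows how a migration team might generate a review worklist from the data flows identified during planning:

```python
# Illustrative sketch of STRIDE-style threat enumeration: pair every
# inter-component data flow identified during migration planning with
# each STRIDE category to produce a review worklist. Flow names are
# hypothetical examples, not prescribed architecture.
STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information disclosure",
    "Denial of service",
    "Elevation of privilege",
]


def enumerate_threats(data_flows):
    """Return every (data flow, threat category) pair for analyst review."""
    return [(flow, threat) for flow in data_flows for threat in STRIDE]


flows = ["web tier -> app tier", "app tier -> database"]
worklist = enumerate_threats(flows)
```

Each pair in the worklist prompts a concrete question ("Can the app tier spoof requests to the database once the network is shared?"), which is exactly the kind of analysis that surfaces assumptions invalidated by the move to a multi-tenant environment.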
However, this approach can be costly, particularly for large-scale migrations that cover dozens of data centers. If a formal evaluation isn’t feasible, a streamlined approach can be used instead. For internally developed applications, design artifacts like data flow diagrams and Unified Modeling Language (UML) artifacts (e.g., component or deployment diagrams) can help validate architectural decisions and flag potentially problematic areas. For applications supplied by a vendor, reference documentation may be appropriate to review. The goal is to understand the application’s security requirements and how those will be addressed post migration. Should time and budget allow, consider application-specific security testing and review as you migrate each application; if you have application security expertise in house, consider leveraging it now to validate that security isn’t undermined by migration efforts.
Moving to the cloud can save the organization significant dollars over the long term, and legacy applications are certainly something that can be included in that effort. But it’s very important for the security organization to be involved, because as we learned from Y2K and applications that stored years as two digits, assumptions matter.
About the author:
Ed Moyle is a senior security strategist with Savvis as well as a founding partner of Security Curve.