In parts one and two, we opened the cloud computing security discussion and explored connections to the cloud.
In this section, we take a closer look at routing and DNS security threats, and how failure to secure them properly can wreak havoc on reliable connectivity, as well as open vulnerabilities in the cloud computing environment.
DNS security threats and cloud computing
When we discuss both routing and DNS, we must consider these services as they pertain to the client-cloud connection, the inter-cloud connection and the cloud-to-cloud connection, because vulnerabilities can occur differently depending on the connection type.
Most of us are familiar with the service the Domain Name System (DNS) provides. In a browser scenario, for example, where a human operates the client, a domain name (e.g., www.securitycurve.com) is far easier to remember than an IP address (e.g., 192.168.111.1). DNS handles the translation of the name to a machine-usable IP address. Another benefit of using DNS is greater flexibility: multiple servers can be set to resolve to a single name for load balancing purposes, and translation and routing proxies allow entire companies to resolve to a single IP address on the Internet while using protected, non-Internet-routable addresses internally.
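As a minimal sketch of that translation step, Python's standard socket module performs the same name-to-address lookup a browser does. The example resolves "localhost" (served from the local hosts file) so it runs without network access; a real client would pass a public domain name instead.

```python
import socket

def resolve(hostname: str) -> list[str]:
    """Return the unique IPv4 addresses DNS reports for a hostname."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    return sorted({info[4][0] for info in infos})

# "localhost" resolves via the local hosts file, so this sketch works
# even without Internet access.
print(resolve("localhost"))
```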
The downside to using DNS is that the client must trust the name translations and associated routing rules. If a user enters www.amazon.com, or an application is configured to call www.amazon.com, it is assumed that the IP address ultimately returned is correct. If the client calls the server by name, however, it runs the risk of being spoofed and rerouted to an "evil twin" cloud. For this reason, when using cloud computing services, calling by IP address is recommended for connections that require the highest surety.
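One way to act on that recommendation is to pin the expected addresses and fail closed when resolution disagrees. The sketch below is a hypothetical illustration, not a production control: the resolver is injectable so the spoofing case can be shown offline, and the 203.0.113.x / 198.51.100.x addresses are RFC 5737 documentation placeholders.

```python
import socket

def verify_endpoint(hostname, pinned_ips, resolver=socket.gethostbyname):
    """Fail closed when DNS returns an address outside the pinned set."""
    ip = resolver(hostname)
    if ip not in pinned_ips:
        raise ConnectionError(f"{hostname} resolved to unexpected {ip}")
    return ip

# A resolver under an attacker's control (standing in here for a
# poisoned cache) is rejected before any connection is attempted.
try:
    verify_endpoint("www.amazon.com", {"203.0.113.10"},
                    resolver=lambda h: "198.51.100.66")
except ConnectionError as exc:
    print("blocked:", exc)
```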
However, using IP addresses is not always feasible; thus, implementing DNS security, such as DNSSEC, is recommended for domains participating in the cloud computing environment. With DNSSEC, participating zones are digitally signed and must have exchanged DNSSEC public keys. DNSSEC mitigates DNS tampering and raises the trust level. One drawback is that the domains must be DNSSEC compliant and the clients must also be DNSSEC aware.
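The "DNSSEC aware" client requirement comes down to one wire-format detail: the query must carry the EDNS0 DO ("DNSSEC OK") flag so the server knows to return signature records. The sketch below builds such a query by hand with only the standard library; a real validating resolver would additionally check the RRSIG/DNSKEY chain on the response, which is well beyond this illustration.

```python
import struct

def dnssec_query(hostname: str, qid: int = 0x1234) -> bytes:
    """Build a DNS A-record query with the EDNS0 DO bit set,
    which is how a client signals that it is DNSSEC-aware."""
    # Header: id, flags (recursion desired), 1 question, 0 answers,
    # 0 authority records, 1 additional record (the OPT record below).
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 1)
    # Question: length-prefixed labels, then QTYPE=A (1), QCLASS=IN (1).
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)
    # EDNS0 OPT pseudo-record: root name, type 41, 4096-byte UDP
    # payload size, DO flag (0x8000) in the flags half of the TTL field.
    opt = b"\x00" + struct.pack(">HHIH", 41, 4096, 0x00008000, 0)
    return header + question + opt

packet = dnssec_query("www.securitycurve.com")
```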
DNS security threats: Why the path matters
Now that we've determined the endpoints, we must consider the actual path. In this particular area there are no guarantees. Even if the client site and the server site have taken various security precautions, and the IP address-to-name mappings are resolved through DNSSEC, the routes can still present problems. The routing path may traverse hostile territory. Routes can be manipulated to present distant rogue collection sites as neighboring peer sites. In some cases, the goal is to prevent processing from happening, while in other cases, observation may be the end goal.
To prevent processing, data can be dropped or rerouted before it reaches its destination. This is more subtle than a DoS attack because there is no active flood of packets at the target, and the client is left unsure why the connection or transaction never completed. Reroute/data-drop attacks are particularly effective against time- and latency-sensitive applications such as voice and video, because even short or intermittent service disruptions can render the service unusable.
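The practical defense on the client side is to bound how long it will wait: a latency-sensitive client should fail fast and alert rather than hang when its traffic is silently dropped en route. A minimal sketch, using a hypothetical probe function:

```python
import socket

def reachable(host: str, port: int, deadline: float = 0.5) -> bool:
    """Probe a service but give up after `deadline` seconds instead of
    hanging when packets are being dropped or rerouted en route."""
    try:
        with socket.create_connection((host, port), timeout=deadline):
            return True
    except OSError:  # covers timeouts and refused connections alike
        return False
```

A voice or video client might run such a probe before each session and raise an alarm on repeated failures, turning a silent drop into a visible event.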
If the cloud provider deploys a front-end server, the next issue to consider is how traffic is distributed to the server farm. In an "anycast" deployment, the same address is advertised from multiple locations and routing delivers each request to the nearest instance. This provides a more robust solution than the DNS round robin configuration, which simply rotates through a pool of addresses. However, the DNS round robin method is free and easy to deploy. Therefore, it is important to discern how the cloud computing service provider is actually distributing the load. In most cases, anycast should be the load balancing method utilized.
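The round robin side of that comparison fits in a few lines (the addresses are RFC 5737 documentation placeholders). The weakness is visible in the code itself: the rotation has no health awareness, so a dead server keeps receiving clients until its record is pulled, whereas anycast failover happens in the routing layer.

```python
from itertools import cycle

# DNS round robin simply rotates through the address pool;
# nothing here knows whether a server is actually up.
server_pool = ["198.51.100.1", "198.51.100.2", "198.51.100.3"]
next_server = cycle(server_pool)

assignments = [next(next_server) for _ in range(5)]
print(assignments)
# ['198.51.100.1', '198.51.100.2', '198.51.100.3',
#  '198.51.100.1', '198.51.100.2']
```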
Once the IP address of the cloud computing server is chosen, routing becomes an issue again. Unlike the client-to-front-end connection, however, this connection is handled transparently between applications, machines or networks. This is significant because the server farms are geographically distributed over a public infrastructure. The wider scale greatly increases the routing vulnerability.
Additional cloud computing attacks
The infrastructure that exists between the client and the cloud computing server, as well as the cloud computing server infrastructure itself, presents opportunities for other specifically targeted attacks, such as DoS and DDoS attacks against either the cloud computing service provider or the cloud computing client. A DoS attack may be launched at the provider when the client is the real target. Within the client's server farm, a single compromised server, combined with legitimate servers that suddenly become busy, can result in anycast directing cloud computing requests to the rogue server and preventing legitimate connections to the client's servers. If the cloud computing service provider is the target, all client servers within the provider's infrastructure could be rendered unavailable by flooding the cloud computing environment's entry points with connection requests.
The problem is exacerbated by the geographically distributed nature of cloud computing. When the server farms span multiple data centers and nations, there are no guarantees that each server is configured and managed identically. Though centralized policy management is available for massively distributed environments, it is not always implemented correctly; few customers demand proof of how policy conformance is enforced by their cloud computing providers. Furthermore, if any server were compromised, the company's policies must be reconciled with the host nation's cyber laws.
Robust centralized management of server configurations can help prevent DoS attacks and ensure all servers in the cloud respond according to the same accepted baseline. Moreover, network and perimeter devices are available to detect malicious traffic, such as DoS floods, mitigate this type of attack and ensure legitimate traffic continues to be processed.
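One common building block inside such devices is per-source rate limiting. The token bucket below is a generic sketch of the idea, not any particular vendor's implementation: each source gets a refilling budget of tokens, a legitimate burst is admitted, and a flood drains the bucket and is dropped.

```python
import time

class TokenBucket:
    """Per-source rate limiter: admit a request only when a token is
    available; a flood exhausts the bucket and gets dropped."""

    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, up to the burst cap.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, capacity=5)  # 10 req/s, burst of 5
admitted = sum(bucket.allow() for _ in range(100))
# Roughly the burst of 5, plus whatever refilled while the loop ran;
# the other ~95 simulated flood requests are rejected.
print(admitted)
```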
Requests and data travel paths are invisible to the average user. Most enterprises trust the correctness of these paths and accept the general assumption that if a connection is made, it is a reliable one. Attackers can misuse this trust by misdirecting connection requests to rogue servers and clouds.
Mitigation techniques such as DNSSEC and DoS prevention provide protection against attacks and raise the assurance level of the overall cloud computing infrastructure. Securing the routing infrastructure is a more vexing challenge that will involve all parties, including clients, cloud computing providers and other service providers.
About the authors:
Diana Kelley is a partner with Amherst, N.H.-based consulting firm SecurityCurve. She formerly served as vice president and service director with research firm Burton Group. She has extensive experience creating secure network architectures and business solutions for large corporations and delivering strategic, competitive knowledge to security software vendors.
Char Sample is a research scientist at BBN Technologies specializing in network security and integration issues.