
The security implications of serverless cloud computing

Cloudflare Workers is a new approach to serverless cloud computing, and it introduces both benefits and drawbacks for security professionals. Expert Ed Moyle discusses the security side of serverless.

Cloudflare Inc. built a cloud environment called Workers that runs on V8 isolates rather than containers. This setup has gotten quite a bit of attention.

Workers enables organizations to implement serverless functionality at the edge that can run in all 155 data centers in Cloudflare's global network with greatly increased efficiency -- and substantially reduced cost -- relative to other serverless cloud approaches. Recent Cloudflare blog posts have outlined in detail how Workers brings about these efficiency improvements, as well as the architectural elements that enable it to do so.

From a development and operations standpoint, these substantial efficiency gains are extremely compelling. From a security practitioner's point of view, though, the model can be challenging and confusing.

Serverless cloud computing is relatively new for many organizations, which means security professionals may have had limited exposure to it and, therefore, relatively little experience securing it. Likewise, much of the information about it is targeted at developers, so there may be quite a few unfamiliar concepts security professionals need to wade through to understand how serverless works. This can leave those practitioners with questions like: Is the security comparable to that of a container or virtual machine? And what should they do to evaluate whether their organization's usage is appropriately secured?

Answering those questions requires an understanding of how the model itself works, its architecture, what it can be used for and how your organization might employ it. With that in mind, let's look at what Cloudflare has built and how it works, and then explore what security practitioners should keep in mind if their organization is considering this approach.

What is Workers?

Workers is a serverless cloud environment. Serverless cloud computing is a type of PaaS that enables you to write application logic while deliberately abstracting away the underlying implementation details -- such as database access and the infrastructure on which the logic runs -- and that scales effectively without limit. The scaling isn't literally infinite, of course, but the only practical limitation from a customer's point of view is its ability to pay the service charges.

Similar to AWS' Lambda@Edge, which enables the deployment of code into Amazon's CloudFront cache, Workers is built to enable developers to author and deploy services that run closer to the end user's browser.

For example, consider a web server that implements business logic for a customer-facing application. Lambda@Edge and Workers enable developers to perform custom actions without touching the actual web server. If a developer wants to insert a custom HTTP header or cookie into the application stream, they can do that using technologies like Lambda@Edge rather than having to update the application code on the web server to do the same thing.
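To make that concrete, the following is a minimal sketch of what such an edge function might look like as a Cloudflare Worker written against the service worker-style fetch event API. The header name and value here are purely illustrative.

```javascript
// Minimal sketch: intercept a request at the edge and add a custom header
// to the response without changing any code on the origin web server.
// The header name 'X-Edge-Example' is illustrative, not a real requirement.
addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  // Pass the request through to the origin unchanged.
  const originResponse = await fetch(request);

  // Responses returned by fetch() are immutable, so make a mutable copy
  // before modifying its headers.
  const response = new Response(originResponse.body, originResponse);
  response.headers.set('X-Edge-Example', 'injected-at-the-edge');
  return response;
}
```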

Unlike Lambda@Edge, Cloudflare Workers is built using a feature of V8 called isolates. V8 is the open source JavaScript engine built by Google's Chromium Project for use in Chrome and Chromium browsers. An isolate in V8 terminology is the mechanism that enables individual tabs to maintain their own JavaScript state.

In a browser, there may be multiple tabs open that are running JavaScript, meaning that at any given time, multiple tabs each have their own unique state and variables. Because one Chrome tab can't access another tab's variables, a single browser process needs to be able to switch quickly and seamlessly between different tabs' contexts without one tab's state stepping on another's. An isolate in V8 is the interface and underlying software architecture that enables this.

That's a simplification, but understanding this is the key to understanding how Workers functions. Cloudflare's bet is that, by using this strategy, it can increase performance relative to alternative strategies. And, as the company outlined in its blog, it appears to be right.

There are a couple of reasons why this is true. First, multiple isolates can run within the same OS process, so a new process doesn't have to be instantiated every time a customer wants to run custom code. Second, that approach consumes fewer resources than running a separate process for each customer would.
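As a rough analogy only, Node.js' built-in vm module can show what multiple independent JavaScript states inside a single OS process look like. To be clear, vm contexts are not the hardened security boundary that Workers' isolates are designed to provide; the sketch below simply illustrates the idea of separate global state coexisting in one process.

```javascript
// Rough analogy: Node's vm module runs scripts in separate contexts, each
// with its own global state, inside one OS process. Note that vm contexts
// are not a security boundary the way V8 isolates in Workers are meant to
// be; this only illustrates separate JavaScript state in a single process.
const vm = require('node:vm');

// Two "tenants," each with its own isolated global state.
const tenantA = vm.createContext({ counter: 0 });
const tenantB = vm.createContext({ counter: 100 });

// The same code runs in each context; neither can see the other's variables.
vm.runInContext('counter += 1;', tenantA);
vm.runInContext('counter += 1;', tenantB);

console.log(tenantA.counter); // 1
console.log(tenantB.counter); // 101
```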

Security considerations

There are a few things to think about when it comes to serverless cloud security. First, this approach uses JavaScript or WebAssembly and runs that code at the edge, which means use cases are, of necessity, constrained to a certain degree. You probably aren't going to write millions of lines of JavaScript to run a legacy funds transfer system that interfaces with a back-end mainframe.

Second, segmentation in a multi-tenant environment is an important factor to consider, because subverting the segmentation model could allow one customer to gain access to another customer's data. In an OS virtualization context, a hypervisor draws the segmentation boundary between multiple virtual OS instances. A container engine like Docker instead draws that boundary at the process level, so different processes within one operating system instance can run under the purview of different containers.

Isolates push the segmentation boundary further still. With isolates, the boundary segmenting one customer's data and execution state from another's can sit within a single operating system process.

That's neither a bad thing nor a good thing for security. Over the past few years, there have been attacks that subvert the segmentation models of both container engines and hypervisors. It doesn't happen often, but it can and does occur.

Likewise, there have been side-channel attacks that allowed data to leak across processes, as well as Rowhammer techniques that can cause data manipulation across segmentation boundaries. Leaks could occur with isolates just as they could with any other technology in a multi-tenant context.

The most important thing is that customers understand the segmentation model and combine that understanding with information about the application being written. By systematically analyzing the application behavior and usage an organization envisions -- for example, by employing a methodology like application threat modeling -- you can evaluate that usage, determine where countermeasures might be needed to bolster the segmentation model and otherwise ensure robust application security practices.
