Logs are a fact of life for security and operations teams. These days, we're collecting and analyzing more logs
than ever before. The SANS 2012 Log Management Survey revealed that 82% of the participating organizations consider logs critical for tracking suspicious behavior, and almost 60% use agents, Syslog and native OS tools to gather logs within a log management platform. One of the major challenges cited was managing agents on systems that collect and forward logs. As organizations move more systems into cloud environments and deploy applications in those environments too, how can logs be aggregated and monitored appropriately?
Fortunately, there are a number of strategies for gathering logs in cloud environments. All of these options involve some drawbacks and tradeoffs, so we'll take a look at the pros and cons of each one to determine what works best for logging in the cloud.
Cloud-based logging possibilities
The most reasonable option for many organizations is to generate logs on systems under their control in Infrastructure as a Service (IaaS) environments. This process is usually straightforward and conforms to the standard logging practices already employed in modern enterprise environments: Windows systems generate Windows Event logs, and Unix and Linux platforms generate standard Syslog messages. Windows systems will likely still need a separate software agent (such as Snare or Kiwi Syslog) to translate events into Syslog format. The real question for enterprises is where the logs should be sent in an IaaS environment.
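To make the forwarding step concrete, here is a minimal, self-contained sketch using Python's standard-library `SysLogHandler`, which plays the role an agent like Snare or Kiwi Syslog fills on Windows: translating application events into Syslog datagrams bound for a collector. The collector here is a throwaway local UDP socket so the example runs on its own; in practice the address would be your aggregation server.

```python
# Sketch: forwarding application events to a Syslog collector over UDP.
# The in-process "collector" below is a stand-in for illustration only;
# a real deployment points the handler at its log aggregation server.
import logging
import logging.handlers
import socket
import threading

# Stand-in collector: a UDP socket that captures one datagram.
received = []
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # OS picks a free port
collector = server.getsockname()       # hypothetical collector endpoint

def capture():
    data, _ = server.recvfrom(4096)
    received.append(data.decode())

listener = threading.Thread(target=capture, daemon=True)
listener.start()

# The forwarding side: SysLogHandler emits classic BSD-syslog framing,
# so translated Windows events and native Linux logs look the same
# once they reach the collector.
logger = logging.getLogger("cloud-app")
handler = logging.handlers.SysLogHandler(
    address=collector,
    facility=logging.handlers.SysLogHandler.LOG_AUTH,
)
handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))
logger.addHandler(handler)
logger.warning("failed login for user admin from 10.0.0.7")

listener.join(timeout=5)
server.close()
print(received[0])  # priority tag <36> = facility auth (4), severity warning (4)
```

The `<36>` priority prefix encodes facility and severity, which is what lets a downstream SIEM sort authentication events from the rest of the stream.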
There are generally three places organizations can send logs. The first option is to set up a local virtual machine in the IaaS environment as a log collector, and then send logs to this machine for aggregation (and potentially analysis). This is a convenient choice that minimizes overhead and traffic between the cloud environment and the enterprise data center. However, it also places the logging server in the same environment as the systems it monitors, which is widely considered a poor security practice. The second option is to send the logs from each individual system back to the central data center. This generates more traffic between the cloud provider and the data center, but removes the local log store (and with it, the risk of that store being compromised). The third option is to send the logs to a managed security service provider that specializes in log management. There are a number of these providers today, including Loggly, Papertrail, Sumo Logic and Splunk Storm. This option balances security against added traffic, but also increases costs.
Another option for logging in the cloud, where possible, is to leverage the IaaS provider itself. Very few cloud providers actually perform log management services, but one notable exception is Terremark, which offers hosted log management. Terremark can manage logs and export them to the customer, or send them to a central aggregation and correlation engine with security information and event management (SIEM) capabilities. Both options are attractive for clients who already subscribe to Terremark's Enterprise Cloud, an IaaS environment.
Using Platform as a Service (PaaS) and Software as a Service (SaaS) providers makes cloud logging trickier. Many PaaS and SaaS providers simply don't provide any logs, or if they do provide logs, they tend to be sparse or in a proprietary format that is difficult to interpret. Operations scheduling also becomes an issue; often, the logs must be downloaded in a "batch" fashion, negating any potential real-time analysis that ties into incident response and intrusion analysis efforts.
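The "batch" pattern described above typically means pulling a periodic export and normalizing it before the SIEM can use it. The sketch below illustrates that step; the export format, field names, and the `fetch_batch_export` stand-in are all invented for illustration, since every provider's format differs.

```python
# Hedged sketch: normalizing a batch log export from a hypothetical
# SaaS provider into one common record shape for downstream analysis.
import json
from datetime import datetime, timezone

def fetch_batch_export():
    # Stand-in for a provider API call (e.g. a nightly export download);
    # real code would hit the vendor's endpoint with credentials.
    return "\n".join([
        json.dumps({"ts": 1340000000, "actor": "alice", "evt": "login_ok"}),
        json.dumps({"ts": 1340000042, "actor": "bob", "evt": "login_fail"}),
    ])

def normalize(raw):
    records = []
    for line in raw.splitlines():
        entry = json.loads(line)
        records.append({
            "time": datetime.fromtimestamp(
                entry["ts"], tz=timezone.utc).isoformat(),
            "user": entry["actor"],
            "event": entry["evt"],
            "source": "saas-provider",  # tag so the SIEM can tell feeds apart
        })
    return records

batch = normalize(fetch_batch_export())
print(batch[1]["event"])  # the failed login is what an analyst hunts for
```

Note what the batch model costs you: by the time this export is downloaded and parsed, the events may be hours old, which is exactly the real-time gap the article describes.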
Gartner's Anton Chuvakin, a noted log-management expert, says, "When organizations move to public cloud computing, the role of application logging will increase, since in SaaS and PaaS environments familiar OS logs simply don't exist. Sadly, organizations today are having trouble analyzing application logs from traditional on-premises applications, even without the whole cloud aspect blended in."
This is certainly true. Application logs are notably more difficult to parse and interpret, and this problem is only compounded in outsourced arrangements with proprietary infrastructure and software.
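To see why application logs are harder than OS logs, consider that each product invents its own layout, so every feed needs a bespoke parser. The sample line and pattern below are invented for illustration, but the shape of the problem is typical: delimiter-separated fields with ad hoc key=value pairs.

```python
# Hedged sketch: extracting structured fields from a made-up
# proprietary application log line. Real formats vary per vendor,
# which is why application log parsing rarely generalizes.
import re

SAMPLE = "2012-09-14 08:02:11|APP-ORDERS|sev=3|user=carol|msg=payment declined"

PATTERN = re.compile(
    r"(?P<ts>\S+ \S+)\|(?P<app>[^|]+)\|sev=(?P<sev>\d+)\|"
    r"user=(?P<user>[^|]+)\|msg=(?P<msg>.+)"
)

fields = PATTERN.match(SAMPLE).groupdict()
print(fields["user"], fields["msg"])
```

Multiply this by every application in an outsourced, proprietary stack and the scale of Chuvakin's point becomes clear.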
Does the notion of real-time analysis matter for cloud-based logging? Although the responses to the SANS survey suggest so, Chuvakin downplays the importance of real-time log analysis. "When organizations think about logging by cloud IT resources, their first problem is simply having the logs for investigations. The need to collect and analyze the logs in real time comes much, much later after the basic log availability, quality and usefulness problems are resolved. If you can detect the compromise a week later and using your own resources, rather than hear it on CNN in nine months, your security posture would be much improved."
Looking forward to cloud-based logging
Ultimately, logging in the cloud requires enterprises to weigh many of the same considerations they already face for logging internally. The major differences include system overhead (log generation increases memory and CPU consumption, which costs money) and safe transit of log data between the cloud provider and the enterprise environment; encrypted tunnels minimize the latter issue. For now, most organizations have not deployed enough systems and applications in the cloud to create major logging headaches. As more assets are deployed, though, logging in the cloud will become a bigger issue.
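The encrypted-tunnel point can be sketched with Python's standard-library `ssl` module: wrap the TCP syslog channel in TLS before log data leaves the provider's network. The collector hostname and the syslog-over-TLS port (6514) are assumptions for illustration; certificate provisioning is site-specific and omitted here.

```python
# Minimal sketch: TLS settings a log forwarder might use so log data
# never crosses the provider boundary in cleartext. Hostname below is
# a hypothetical internal collector, not a real endpoint.
import ssl

context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
context.check_hostname = True                     # verify collector identity
context.verify_mode = ssl.CERT_REQUIRED

# At send time the forwarder would do something like:
#   raw = socket.create_connection(("logs.example.internal", 6514))
#   tls = context.wrap_socket(raw, server_hostname="logs.example.internal")
#   tls.sendall(b"<134>app: user event\n")
print(context.verify_mode == ssl.CERT_REQUIRED)
```

Requiring certificate verification matters as much as the encryption itself: without it, a forwarder will happily ship logs to anyone impersonating the collector.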
About the author:
Dave Shackleford is senior vice president of research and CTO at IANS, and a SANS analyst, instructor and course author. He has consulted with hundreds of organizations in the areas of security, regulatory compliance, and network architecture and engineering. He is a VMware vExpert and has extensive experience designing and configuring secure virtualized infrastructures. He has previously worked as CSO for Configuresoft, CTO for the Center for Internet Security, and as a security architect, analyst and manager for several Fortune 500 companies. Dave is the co-author of Hands-On Information Security from Course Technology as well as the "Managing Incident Response" chapter in the Course Technology book Readings and Cases in the Management of Information Security. Recently, Dave co-authored the first published course on virtualization security for the SANS Institute. He currently serves on the board of directors at the SANS Technology Institute and helps lead the Atlanta chapter of the Cloud Security Alliance.