Leigh Purdie, director at InterSect Alliance (www.intersectalliance.com), says:

Over the last few years, the SANS log management survey has highlighted that the process of actually collecting useful log data has become much easier for organizations. We'd like to think that, as the de facto standard for centralized log collection across a wide variety of platforms, the free/open-source Snare series of agents has had a fair bit to do with that.

However, now that collecting log data has become a relatively painless task, IT security teams are flooded with logs. Managing this volume of data over the network is generally not too much of a challenge for those who run a data centre or network; but the constant stream of information heading back to a single server, at all times of the day with no let-up, can be draining, particularly if the agents that collect the log data are not spectacularly careful with their resource utilization. Snare was designed from the start to be very careful with resources: we keep a very low memory footprint, we push data off the client as quickly as possible so that disk utilization stays low, and we implement active filtering to limit log flow to only those events that are likely to be of interest from a security perspective.

In addition, an agent-based solution like Snare, in contrast to older agentless 'octopus-style' log grabbers, decentralizes processing and filtering, doesn't require domain-administrator-level privileges, and allows practically real-time dissemination of log data back to a central location in a resilient and responsive fashion. All of this makes the life of a system administrator easier, which means the IT security team has a better chance of getting effective auditing implemented. At the end of the day, though, there is still a lot of raw data flowing across the network, and it needs to be intelligently managed and analyzed.
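To make the "filter at the edge, push to the centre" pattern concrete, here is a minimal sketch in Python. It is not Snare's actual implementation; the event types, the collector address, and the forward_events helper are all invented for illustration. It shows the two behaviours described above: discarding uninteresting events on the client so they never cross the network, and pushing the rest immediately to a central collector so nothing accumulates on local disk.

```python
import socket

# Hypothetical sketch of an agent-side filter-and-forward loop.
# None of these names come from the Snare codebase.

COLLECTOR = ("192.0.2.10", 514)  # placeholder address for a central syslog collector

# Active filter: only event types likely to matter from a security
# perspective are forwarded; everything else is dropped on the client.
INTERESTING = {"LOGIN_FAILURE", "PRIV_ESCALATION", "FILE_DELETE", "POLICY_CHANGE"}

def forward_events(event_source, sock=None):
    """Filter a stream of (event_type, message) tuples and push the
    matches to the collector as UDP syslog-style messages."""
    sock = sock or socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for event_type, message in event_source:
        if event_type not in INTERESTING:
            continue  # filtered at the edge; never crosses the network
        # <13> = syslog priority (facility 1 'user', severity 5 'notice')
        sock.sendto(f"<13>{event_type}: {message}".encode(), COLLECTOR)

if __name__ == "__main__":
    sample = [
        ("LOGIN_FAILURE", "3 failed logins for root from 10.0.0.7"),
        ("CRON_RUN", "nightly backup started"),  # dropped by the filter
        ("FILE_DELETE", "payroll.xls removed by user gxh"),
    ]
    forward_events(sample)
```

The design point is that the filtering decision is made per-agent, on the client, which is what keeps both network load and central storage proportional to the security-relevant events rather than to the raw event rate.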

Raw data is great for forensics, but raw data doesn't allow your average system administrator to detect at a glance whether their network is infested with malware, or to produce compliance-driven reports. Nor does raw data help the people responsible for key information resources within an organization work out whether their spreadsheets, databases, web content and so on are reaching the right people, and staying out of the hands of those who shouldn't have access.
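The gap between raw data and an at-a-glance view is essentially an aggregation step. The short Python sketch below is purely illustrative (the event format is invented): it collapses a large stream of raw records into the kind of one-screen summary an administrator or data owner can actually scan.

```python
from collections import Counter

def summarize(events):
    """Reduce raw (host, event_type) records to a one-screen summary."""
    by_type = Counter(etype for _host, etype in events)
    by_host = Counter(host for host, _etype in events)
    lines = ["Top event types:"]
    lines += [f"  {etype:<20}{n:>6}" for etype, n in by_type.most_common(5)]
    lines += ["Noisiest hosts:"]
    lines += [f"  {host:<20}{n:>6}" for host, n in by_host.most_common(5)]
    return "\n".join(lines)

# Toy data standing in for a couple of gigabytes of raw logs.
raw = [("web01", "LOGIN_FAILURE")] * 420 + [("db02", "FILE_DELETE")] * 7
print(summarize(raw))
```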

A few years ago, the corporate hierarchy saw IT security as a 'black box': they employed people to manage IT-related risks on their behalf. As businesses became increasingly information-focused and information-dependent, it grew more and more obvious that delegating responsibility for such a key component of the organization was unsustainable. Decision makers needed to be broadly aware of the current threats to their information, and to be convinced that the countermeasures employed were up to the task of providing reasonable levels of protection. Security logs are a key resource both in evaluating the current threat profile and in determining the success of the deployed countermeasures.

Turning a couple of gigabytes of raw data into a one-page summary that is tailored to a particular 'data owner' or CIO? That generally requires two things. The first is raw computing power, and we all have access to that. The second is a tool with enough flexibility to map the raw data onto organizational security goals. The trick, of course, is that every organization has different security requirements. One company may focus on the corporate gateway to the Internet; another may have its internal network air-gapped and concentrate on attacks against its public-facing web server; a third may not be as concerned with the gateway environment, but will be critically interested in changes made to a particular internal database by users outside an 'authorized' list of staff.
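That last case can be expressed as a simple policy rule. The Python sketch below is hypothetical (the field names, the AUTHORIZED set, and the check_event helper are invented for illustration), but it shows what "mapping raw data onto an organizational security goal" means in practice: flag any modification of a watched database by a user outside the approved list.

```python
# Hypothetical policy rule for one data owner's goal:
# alert on changes to a particular database by unauthorized users.

AUTHORIZED = {"asmith", "jbloggs"}  # assumed list of approved staff
WATCHED_DB = "payroll"              # the database this owner cares about

def check_event(event):
    """Return an alert string if the event violates the policy, else None."""
    if event["object"] == WATCHED_DB and event["action"] == "MODIFY":
        if event["user"] not in AUTHORIZED:
            return (f"ALERT: {event['user']} modified {WATCHED_DB} "
                    f"at {event['time']}")
    return None

events = [
    {"user": "jbloggs", "object": "payroll", "action": "MODIFY", "time": "09:14"},
    {"user": "mallory", "object": "payroll", "action": "MODIFY", "time": "02:31"},
]
for e in events:
    alert = check_event(e)
    if alert:
        print(alert)
```

A different organization would keep the same machinery and simply swap in its own objects, actions and authorized lists, which is exactly the kind of flexibility the analysis tool needs to provide.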