
Log Management

Defensia, 2012

Rafel Ivgi

This book introduces the common Log Management issues, tasks, techniques and methodologies that should be implemented in order to enable real-time monitoring, comply with regulations, and engage in forensic investigations.


TABLE OF CONTENTS

Introduction
Why Does Log Data Matter?
Why Are People Collecting Log Data?
What Do Logs Have To Offer?
Log Management Terminology
System Health Monitoring
Forensics
Regulatory Compliance
How Are Organizations Using Log Data?
Analysis
Log Retention
What Are Companies Using for Log Management?
SIEM Evaluation Criteria
Ability to Execute
Completeness of Vision
What Are the Pain Points with Log Analysis?
Collecting Logs
Metasploit Pro Gets SIEM, Cloud Integration
Analyzing Logs
Sharing Log Data and Chain of Custody
What Is Event Log Management Software (ELMS)?
Log Management Key Features and Technology
Log Management Deployment Life-Cycle
Determining Organizational Requirements – Basic Methodology
List/Mapping of Log-Generating Devices
Which Systems Create Log Files?
What Kind of Information Is in Log Files by Default?
Which Information Can Be Defined to Appear in Log Files?
System Sizing
Why and How to Calculate Your Events per Second
Why and How to Calculate Your Event Log Size
Key Differentiators between SIEM Products
FIREWALL AND INTRUSION PREVENTION/DETECTION SUPPORT
SERVER OS SUPPORT
LOG COLLECTION AND MANAGEMENT
RULES
INCIDENT MANAGEMENT
REPORTING
COMPANY STABILITY
Complying with Laws and Regulations
SOX, HIPAA, PCI-DSS, ISO 27001/2
PCI DSS Compliance – Periodic Operational Task Summary
Implementing the 20 Critical Controls with Security Information and Event Management (SIEM) Systems
Abstract and Introduction
Goals and Philosophies of the Top 20 Critical Controls
Evaluating the Relationship between the Controls
Critical Controls and SIEM
Post-Implementation Value of SIEM in Light of the Controls
Risk Management
Ongoing, Actionable Metrics
Auditing the Effectiveness of Controls
Things to Keep in Mind
Conclusion
Log Collection
Comprehensive Log Data Collection and Log Management
Cross-Platform Log Collection
Universal Database Log Collection and Log Management
Agent-less and Agent-based Collection
Scalable Log Centralization
Log Archiving and Retrieval
Activity Auditing
How to Collect Log Files According to Legal Requirements
Collecting Windows Logs Using Snare
Open-Source and Free Log Analysis and Log Management Tools
The Open-Source Log Management Tools
LogBinder – Getting Logs That Make Sense from SQL, Exchange and SharePoint
The Importance of Log Time Synchronization
Servers and Systems That Need to Collect Log Files
Critical Logs and Alerts to Be Collected from All
Comprehensive Log Data Collection and Management
Scalable Log Centralization
Log Archiving and Retrieval
Choosing Your Solution
Log Management, Analysis, Correlation, and Compliance Reporting
Overview
Benefits
Log Analysis
Security Event Management
Live & Near Real-Time Monitoring
Detecting Important Events
Challenges with Log Management
Lack of Standard Log Format
Reacting: Traditional Approach to Computer Management
Reacting to Events
Defining Top Event Scenarios
Designing & Defining the Response Actions
Automated Remediation That Works for You
Advanced Threat Detection & Response (External)
Advanced Threat Detection & Response (Internal)
Compliance Automation & Assurance
Operational Intelligence & Optimization
Specific SIEM Script Implementation
Monitoring pastebin.com within Your SIEM
Tracking Tweets in Your SIEM
Monitoring WordPress Blogs within a SIEM (ArcSight Example)
Examples of Log Analysis from Many Log Sources
Case Studies from Real Business Cases
SIEM-Based Intrusion Detection
Introduction
System Setup
Suspicious Traffic and Services
SMTP, IRC and DNS
Suspicious Outbound Internet Traffic
New Hosts and Services
Darknets
Windows Account Creation and Permissions
Foreign Country Logins
Web Application Attacks
SQL Injection and Cross-Site Scripting
Data Exfiltration
Detecting Client-Side Attacks
Conclusion
SIEM Tools Have Blind Spots!
Forensics
What Does a Cyber-Attack Look Like?
Creating Your Own SIEM and Incident Response Toolkit Using Open Source Tools
Introducing an Incident Response Toolkit
SIEM: The Core of the Toolkit
Reasons Not to Create Your Own SIEM
How to Create a Toolkit and SIEM
Data Enrichment for Simpler and Faster Forensic Analysis
Inspecting Past Incidents
Reporting
Open Source Solutions
SPLUNK
Security
Compliance
Network Operations
Log Management Using Splunk
Security’s New Challenges
Splunk: Big Data and Security Intelligence
Real-Time Forensics Operationalized
Metrics and Operational Visibility
Reaching for Security Intelligence
Monitoring Unknown Threats
Supporting the Security Intelligence Analyst
Protect Your IT Infrastructure from Known and Unknown Threats
Splunk and the Unknown Threat
The Splunk App for Enterprise Security
Watching for ‘Unknown Threats’ from Advanced Persistent Attackers
Splunk: A Big Data Solution – Finding the ‘Unknown Threat’
Compliance Solutions
FISMA Compliance
PCI Compliance
SEC Compliance
HIPAA
Splunk Enterprise
Log Management in the Cloud
Basic Cloud-Based Log Management – Data Flow
Basic Cloud-Based Log Management – Players
LogEntries
Papertrail
SumoLogic
LogLogic
Managed Security Services – Top World Providers Analysis
VMware’s Terremark: Cloud-Based SIEM (MSS – Managed Security Services), Tripwire Integration for Online Integrity and Data Monitoring, and Cloud and On-Site Forensics
IBM Managed Security Services
Trustwave Managed SIEM
StillSecure – Cloud Security Services Platform
CloudAccess Cloud-Based SIEM
RSA enVision – Cloud Managed SIEM Providers
HP ArcSight – Cloud Managed SIEM Providers
Rackspace Cloud Monitoring
Cloud-Based Alerting Infrastructure
Locate Resources
Cloud-Based Anomaly Detection – Anomaly Checking vs. Other Organizations
Incident Response, Notification, and Remediation
What’s in a Cloud Security Plan?
Managed Cloud-Based SIEM Service
Business Drivers
Challenges and Needs
Managed Cloud-Based SIEM Service Overview
Business Benefits
SIEM in the Cloud
Enterprise SIEM in the Cloud
SIEM for Managed Security Service Providers
NetForensics Can and Does... SIEM in the Cloud
A Modern SIEM: IT Security Intelligence
QRadar Risk Manager
High Availability Solution That Delivers Continuous Network Security Monitoring
Network Activity Collectors
Virtual Activity Collectors
EventTracker Cloud
LogMojo Logging & Archival for FortiGate Firewalls
Try Before You Buy
Questions for the Cloud Provider
Considerations for In-House Log Management
Summary


Introduction

Log management (LM) comprises an approach to dealing with large volumes of computer-generated log messages (also known as audit records, audit trails, event logs, etc.). LM covers log collection, centralized aggregation, long-term retention, log analysis (in real time and in bulk after storage) as well as log search and reporting.

Log management is driven by reasons of security, system and network operations (such as system or network administration) and regulatory compliance.

Effectively analyzing large volumes of diverse logs can pose many challenges, such as huge log volumes (reaching hundreds of gigabytes of data per day for a large organization), log-format diversity, undocumented proprietary log formats (that resist analysis) and the presence of false log records in some types of logs (such as intrusion-detection logs).

Users and potential users of LM can build their own log management and intelligence tools, assemble the functionality from various open-source components, or acquire (sub-)systems from commercial vendors. Log management is a complicated process, and organizations often make mistakes while approaching it.

The advantages of log management extend well beyond security to system health monitoring, forensics, regulatory compliance and marketing. Log monitoring that detects system problems early can affect the bottom line by minimizing overtime and outages due to system failures. Legal fees can be reduced by having solid forensic evidence to support business decisions. Marketing groups can gain insight into what products people are interested in on a web site. Regulatory compliance is a relatively new reason for log management that has become a necessity with SOX, HIPAA and other regulations. These reasons have helped spawn a $380 million log management market.

Why Does Log Data Matter?

Logs contain important information that can make the difference in passing an assessment, stopping an intrusion and knowing where to make repairs on the network. When something goes wrong, logs are always the first thing support technicians want to look at. That’s why every good security course talks about turning on logging and gives ideas on how to tune it. Logs are how we tell what computers have been doing, something you can’t see just by looking.

Last summer, when I returned home from vacation to find water dripping from the kitchen ceiling, I didn’t need any logs to know what was happening. I could see the water, water flows downhill, and I knew the bathroom was on the floor above. We simply can’t do that with computers. Log data is our only way to see what’s going on and what went wrong; sometimes it can even give an early warning.


Log data matters because it gives us a way to see what our computers have been up to – using the term ‘computers’ very loosely. What we’re really talking about is any device associated with computing – including switches, routers, firewalls, applications, databases, appliances and computers. The list of sources for logs is almost endless.

Why Are People Collecting Log Data?

In a question about how log management would most benefit their organization, 51 percent of respondents picked event detection. Trailing as the second most important option was day-to-day IT operations at 13 percent. Among the Global 2000, the same two options were first and second, with 39 percent of that group picking event detection and 23 percent picking operations. Regulatory compliance was ranked third most important in both groups.

When these same questions are analyzed against the various industries represented by the survey, all segments – except for healthcare – stated that they collected logs because of their value in detecting and analyzing security and performance incidents. Not surprisingly, respondents from the heavily regulated healthcare industry selected compliance with regulations, or proving compliance with regulations and standards, as their primary reason for log data collection. The remaining industries, including financial, ranked compliance third in order of importance. Among healthcare respondents, assessing IT incidents and minimizing downtime was selected second, and detecting and analyzing security and performance incidents was third.

What Do Logs Have To Offer?

The ancient city of Babylon was thought to be so secure that, while the Persian army laid siege to it, the city didn't bother to monitor its perimeter defenses, choosing instead to party. By the next morning, a river running through the city had been diverted to allow the Persian army access, the city had been overthrown and a new king installed. According to a recent survey of system administrators, network logs often get as much attention as the perimeter defenses of Babylon. Firewalls and Intrusion Detection Systems are often installed and forgotten, leaving attackers free to test the network until they finally find a hole while the system administrator relaxes with a false sense of security.

Many times, computer consultants and help desk personnel are called in to help out with a problem and find that the best information about the problem is the recollection of administrators and operators. End users often view the error messages they get as “meaningless gobbledy-gook” or “one of those click-OK boxes”. Those are both quotes from end users, and anyone involved in end-user support could add similar responses. Without accurate error messages, it is often impossible to correctly diagnose the root cause of the problem. System administrators can often configure systems to store those messages considerably longer than the defaults, but according to this survey, they rarely do.

In one recent analysis of a compromised system running Microsoft Small Business Server, it appeared likely that the server was compromised in October 2005, but it was March of 2006 when the problem was detected. The oldest security logs to be found were just under 3 days old, not nearly enough to provide information about what happened. This small business ended up having to completely rebuild the server, causing downtime for the company and overtime work for the system administrator. For a relatively small company, this was a considerable expense. Large companies often deal with logs in much the same manner as the smaller ones, with the resulting costs being proportionally higher.

We've identified four key areas where log management can offer substantial value to a company. These areas are often represented with additional sub-classifications, but these four cover it all:

System Health Monitoring
Forensics
Regulatory Compliance
Marketing

The marketing use for log data was not represented in the survey results (an example would be the number of “hits” to a web site).

Log Management Terminology

Collection - Most modern computer equipment is capable of producing copious amounts of information about what it is doing. Often this data is stored in memory and is only available for a short period of time. By collecting this data, it is stored so that it can be recalled and processed later. One issue is to ensure that ALL log data is actually collected, especially during a network problem or when a lot of log data is being generated.
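
As a concrete illustration, the following minimal collector sketch (Python; the port number and archive path are assumptions made for the example, not taken from this book) receives syslog-style messages over UDP and appends each one, unmodified, to a local file so that messages survive device-side rotation:

    import socket

    LISTEN_ADDR = ("0.0.0.0", 5140)  # assumption: an unprivileged syslog-style port
    ARCHIVE_PATH = "collected.log"   # assumption: local archive file

    def collect():
        """Receive syslog datagrams and append each one, verbatim, to disk."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(LISTEN_ADDR)
        with open(ARCHIVE_PATH, "ab") as archive:
            while True:
                data, _peer = sock.recvfrom(65535)  # one message per datagram
                archive.write(data.rstrip(b"\n") + b"\n")
                archive.flush()  # limit loss if the collector dies mid-burst

    if __name__ == "__main__":
        collect()

Note that UDP can silently drop datagrams under load, which is exactly the concern raised above; a production collector would use a reliable transport or a local agent to ensure that all log data is actually collected.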

Normalization - Each piece of computer equipment generates log data in its own format. The process of normalization involves converting logs in their existing formats to a common, more readily accepted format. A good example is the date fields in logs: some systems store the data in local time while others use GMT, some use AM/PM while others use a 24-hour clock, and there are many other variants. Normalization allows events from different sources to have a common format for the timestamp. Normalization is not limited to times; all data fields need to be normalized as much as possible. One concern with normalized logs is that they have been changed from the original. If logs are “normalized”, it is often best to maintain a copy of the original for forensic purposes.
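
To make the timestamp example concrete, here is a hedged normalization sketch (Python; the two input formats and the fixed-offset handling are simplifying assumptions, as real devices have many more variants). It converts both a 24-hour local-time format and an AM/PM format into a single UTC ISO 8601 field while keeping the raw record untouched:

    from datetime import datetime, timedelta, timezone

    KNOWN_FORMATS = [
        "%Y-%m-%d %H:%M:%S",     # 24-hour clock, e.g. "2012-03-01 14:05:09"
        "%m/%d/%Y %I:%M:%S %p",  # AM/PM clock,   e.g. "03/01/2012 02:05:09 PM"
    ]

    def normalize_timestamp(raw_ts, utc_offset_hours=0):
        """Parse a device-specific timestamp and return UTC ISO 8601, or None."""
        for fmt in KNOWN_FORMATS:
            try:
                local = datetime.strptime(raw_ts, fmt)
            except ValueError:
                continue
            utc = (local - timedelta(hours=utc_offset_hours)).replace(tzinfo=timezone.utc)
            return utc.isoformat()
        return None  # unknown format: keep the raw value and flag it for review

    def normalize_event(raw_line, raw_ts, source, utc_offset_hours=0):
        """Build a normalized record while preserving the original for forensics."""
        return {
            "timestamp_utc": normalize_timestamp(raw_ts, utc_offset_hours),
            "source": source,
            "original": raw_line,  # unmodified copy, as recommended above
        }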

Aggregation - Combining log data into a common data store so that data from individual events can be examined as a whole. Log aggregation enables the log analyst to see events from the network as a whole, rather than seeing them all as individual bits of information.

Correlation - Finding links between seemingly different and/or distributed events and identifying a root cause that links them together. An example of this might be a general network slowdown, an increase in blocked ports on a firewall and a new virus being released.
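
As a hedged sketch of what one correlation rule can look like in practice (Python; the event types, field names and five-minute window are invented for illustration), the following links firewall port-block events with IDS virus alerts from the same source address that occur close together in time:

    from datetime import timedelta

    WINDOW = timedelta(minutes=5)  # assumption: events this close together are related

    def correlate(events):
        """Yield (firewall_event, ids_event) pairs that share a source address
        and occur within WINDOW of each other - one candidate root-cause link."""
        blocks = [e for e in events if e["type"] == "firewall_block"]
        alerts = [e for e in events if e["type"] == "ids_virus_alert"]
        for block in blocks:
            for alert in alerts:
                same_host = block["src_ip"] == alert["src_ip"]
                close_in_time = abs(block["time"] - alert["time"]) <= WINDOW
                if same_host and close_in_time:
                    yield block, alert

Production correlation engines index events by key and time rather than comparing every pair, but the rule itself - a shared attribute plus time proximity - is the same idea.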

Alerting - If administrators fail to examine the logs and the indicators they have access to, they will miss things that are going on until they become obvious to everybody. A step beyond examination is to have the system send out an alert when pre-determined thresholds have been exceeded. In more advanced systems, alerting can be based on algorithms that are self-learning and make use of statistical anomaly detection.
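
A minimal threshold alert might look like the following sketch (Python; the threshold, window and event fields are assumptions for the example):

    from collections import deque
    from datetime import timedelta

    THRESHOLD = 10                 # assumption: alert on more than 10 matches...
    WINDOW = timedelta(minutes=1)  # ...arriving within one minute

    def threshold_alerter(events, predicate, notify):
        """Call notify() whenever more than THRESHOLD events matching
        predicate arrive within WINDOW - a pre-determined-threshold alert."""
        recent = deque()
        for event in events:  # events are assumed ordered by time
            if not predicate(event):
                continue
            recent.append(event["time"])
            while recent and event["time"] - recent[0] > WINDOW:
                recent.popleft()
            if len(recent) > THRESHOLD:
                notify(f"{len(recent)} matching events since {recent[0].isoformat()}")
                recent.clear()  # avoid re-alerting on the same burst

A predicate such as lambda e: e["event"] == "failed_login" turns this into a simple brute-force login alert; the self-learning systems mentioned above effectively replace the fixed THRESHOLD with a baseline learned from historical data.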

Analysis (Decision support) - It is good to have the logs and get alerted when there is an issue, but if the log management system can assist with decision support, that is even better. Some advanced systems will pull all the information you need together so that, in a single mouse-click, you can pull up related information.

System Health Monitoring

Over half of the people surveyed used logging to monitor the health of their networks. For half of these (25% of the total number surveyed), monitoring the health of the network was the only thing they did with their log data. “Used properly, system logs are like the pulse of a system. A log can often explain sources of configuration problems or foretell of impending hardware failures.” (Forrest Hoffman, “HPC Logging with syslog-ng, Part One,” Extreme Linux, Linux Magazine, November 2005.) In the case of the small business mentioned earlier, there had been numerous network problems in the beginning of 2006, but the lack of consistent log monitoring and automated alerting allowed the problem to go undetected.

Forensics

Approximately one third of the companies surveyed use their logs for forensic applications. Eleven percent (11%) used the logs exclusively for forensic research. This is an area that has not received as much attention as other areas, but it offers quite a bit of value. In February 2006, SC Magazine awarded LogLogic the Reader Trust Award in the Best Computer Forensics category for its log management appliances.

In any forensic investigation, there is always a need for more evidence from different sources. This evidence is used to build a timeline of events. In a recent case where a government web server was defaced, the PIX firewall logs, which had been forwarded to their log server, showed the exact time of the connection that defaced the web site. The logs also showed a number of other attempts. By matching up the times that the connections were built and subsequently torn down, it was possible to identify how long the attacker was connected to the system.
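
The duration calculation can be sketched as follows (Python; the tuple format below is a simplified stand-in for the firewall's "built"/"teardown" connection messages, and the timestamps are invented for the example):

    from datetime import datetime

    def connection_durations(records):
        """records: iterable of (timestamp, action, conn_id) tuples, where action
        is 'built' or 'teardown'. Returns {conn_id: duration} for every
        connection whose build and teardown were both logged."""
        built = {}
        durations = {}
        for ts, action, conn_id in records:
            if action == "built":
                built[conn_id] = ts
            elif action == "teardown" and conn_id in built:
                durations[conn_id] = ts - built.pop(conn_id)
        return durations

    # Illustrative records only: a connection held open for under 2 seconds.
    records = [
        (datetime(2006, 3, 1, 2, 14, 7), "built", "305412"),
        (datetime(2006, 3, 1, 2, 14, 9), "teardown", "305412"),
    ]
    print(connection_durations(records))  # {'305412': datetime.timedelta(seconds=2)}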

In this case, the attacker was connected for less than 2 seconds. The log data suggests that this was a scripted attack. The same attack was launched against 4 web servers within a few minutes, two of which were compromised. This same attacker also attacked an important government target around the same time, and the collection of logs from various sites was used to identify and apprehend the attacker. The story doesn't always end so well or quickly, but without log data, there would not have even been a case.

Often a solid log trail can be used to avoid expensive litigation. In another recent case, an employee was fired for some Internet-based activity. There were initial threats of legal action related to unemployment compensation, but once the former employee was presented with an overwhelming amount of evidence, the case was dropped. This was a case where the existence of log data was able to save thousands of dollars in litigation costs and unemployment compensation costs.

Regulatory Compliance

Another third of survey respondents used their logs for regulatory compliance, with ten percent (10%) using them solely for compliance-related issues. Compliance with standards such as the NIST guide for HIPAA implementation, the Sarbanes-Oxley Act (SOX) of 2002, the VISA Cardholder Information Security Program (CISP) and others has been a strong force in increasing the awareness of maintaining log data.

Many companies incur the expense of maintaining the required log data without realizing the additional benefits of fully utilizing the log information.


The length of time that logs need to be maintained varies depending on the industry. The NIST guide for HIPAA implementation requires that logs be maintained for a minimum of 6 years (page 84), and 7 years in other cases (page 85). In section 103, SOX requires that “…audit work papers, and other information related to any audit report, in sufficient detail to support the conclusions reached in such report” be maintained for 7 years. Beyond these requirements and the Gramm-Leach-Bliley Act of 1999, various accounting agencies recommend or require that logs and documentation be kept for a varying number of years.
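
Enforcing such a policy is usually a scheduled job. A hedged sketch (Python; the 7-year window mirrors the SOX figure above, and the archive directory is an assumption made for the example, not a prescription):

    import os
    import time

    RETENTION_DAYS = 7 * 365          # assumption: a 7-year, SOX-style window
    ARCHIVE_DIR = "/var/log/archive"  # assumption: where sealed archives live

    def purge_expired(archive_dir=ARCHIVE_DIR, retention_days=RETENTION_DAYS):
        """Delete archived log files older than the retention window.
        Run from cron; the purge itself should be logged as well."""
        cutoff = time.time() - retention_days * 86400
        for name in os.listdir(archive_dir):
            path = os.path.join(archive_dir, name)
            if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
                os.remove(path)
                print(f"purged {path} (older than {retention_days} days)")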

The HIPAA Security Rule requires “procedures to regularly review records of information system activity, such as audit logs, access reports, and security incident tracking reports” (164.308(a)(1)(ii)(D)). The NIST guide for HIPAA implementation recommends a log review at least twice a week (page 24) and a yearly review of all documentation (page 85).

Marketing

One additional use for log data that is often overlooked is marketing. Some IT people lack an understanding of the marketing side of the business. Likewise, many marketing people don't have enough technical background to know how to communicate with the IT group. IT departments would do well to try to bridge this gap to the advantage of the company.

If a business notices a strong increase of interest in a certain product, it could be profitable to increase the emphasis on that product. Web traffic is often used to measure the impact of a marketing emphasis (Jeffery H. Rubin, Network Computing, September 16, 2004).

IT people need to learn the value of marketing. If the product or service doesn't get to market, the dollars don't come in to pay their salaries or for the tools they need to do their jobs.


How Are Organizations Using Log Data?

When determining how they most benefit from use of their log data, compliance came in fourth on the list. Respondents were asked to pick the three most important ways log management could benefit their organizations and rank them in order. The choices included security alerting, compliance reporting, forensics, system maintenance and information asset protection.

A rating average was derived from the choices that were made. “Security alerting” topped the list with a rating average of 2.17. Information asset protection followed closely with a rating of 2.15. System maintenance came in third with a rating of 2.08. The bottom of the list included compliance reporting with a rating of 2.00 and forensics with a rating of 1.34. What this means is that organizations that are strong in compliance also see other ways they can use this data to their benefit.

Analysis

In the 2008 SANS Log Management Survey, 78 percent of respondents said their reason for collecting log data was “Detection and Analysis of Security and Performance Incidents.” That’s up from 46 percent in 2006. In this year’s survey, we dropped the word “automatic” from this choice, so it’s safe to assume that part of the increase this year is due to the change in wording. In itself, this is interesting because it reveals that some people are doing automatic detection and analysis of security and performance incidents, while others are also using logs for detection and analysis but don’t have the automation piece.


However, the survey was also able to show that more respondents are doing automation this year than last year. The use of log analysis appliances is up from 10 percent in 2007 to 19 percent in the 2008 survey. When asked what they use automation for, the clear winner at 65 percent was “Use the logs after the fact to help troubleshoot.” Some 25 percent indicated a homegrown application with daily automated review using keyword detection, up from five percent in the 2007 survey.

When it came to stating what changes they would like to make to their log management systems, one of the main responses was the need for automated alerting for system and security events. Another key change people would like is the correlation of events between different devices. What’s telling is that, while automated appliance usage is up, correlation is slipping. In the 2008 survey, 32 percent do automated event correlation, as compared with 42 percent last year.

Among the Global 2000 companies, 52 percent are doing automated correlation this year, compared to 72 percent last year. Another area where organizations need more automation is in analyzing security and performance issues together, as we will explore later in this paper.

Log Retention

In another question, 47 percent of survey respondents said compliance is driving log retention policy at their organizations. In the 2008 survey, when people were asked how long they maintain logs, the largest group (19 percent) maintained logs for one to two years. In the 2007 survey, the largest group, at 14 percent, was “Operating System Default/Not Sure.” In this year’s survey, 35 percent of respondents indicated that they intend to keep log data longer than one year, up from 20 percent in the 2007 survey.

The majority of the respondents who selected the option “other” as their reason for collecting logs mentioned specific regulations or requirements and security issues. This amounted to 5 percent of respondents who have security- and regulatory-related reasons for log data collection but didn’t feel that they quite fit into the categories we had defined.

Compliance is a larger factor for some business segments than others. The retail segment seems to be the most affected by compliance issues. Some 71 percent of respondents in the retail segment, under the new and evolving PCI DSS standards, said that their log retention policy was driven by compliance. PCI compliance has specific recommendations about the storage of log data. Some 66 percent of healthcare respondents also ranked regulatory drivers as their reason for retaining log data. Following that, 55 to 57 percent of respondents in the government, manufacturing and telecommunications sectors responded similarly.

Larger companies (Global 2000) collect logs for many of the same reasons, with a higher percentage of Global 2000 organizations collecting log data for compliance and regulatory purposes. The one category that the Global 2000 chose less often this year than overall respondents is the assessment of IT incidents and the minimization of downtime, which indicates that log management is not being used to its potential for troubleshooting, as well as alerting, in larger organizations. This is down from the 2007 survey, in which both the Global 2000 and the other companies showed nearly the same amount of interest in this category.

What Are Companies Using for Log Management?

Seventy-three percent of respondents indicated that they had log servers in this year’s survey. In the 2007 survey, 57 percent of respondents indicated that they had log servers – up from 35 percent in 2006. That is a particularly interesting statistic given that 64 percent of respondents are not satisfied with their log management solutions. Still, they are attempting some form of log management anyway, which indicates that they’re aware of the value of their log data.

In this year’s survey, instead of allowing only one answer to the question of which vendor is used for log management, we allowed respondents to check all that applied. This yielded a surprise: many companies use more than one log management vendor. This doesn’t help with a comparison of log management vendors against prior years – some went up and some went down – but the “vendor” selected most often was “Homegrown Solution.” Thirty-eight percent of overall respondents selected this category as one of their log management vendors, while 46 percent of the Global 2000 companies selected that option. It will be interesting to follow this number in the coming years.

We also asked them to rate their level of satisfaction. Thirty-five percent of the respondents overall were satisfied with their log management, and 42 percent of the Global 2000 were satisfied. Of those companies that are satisfied, we examined what they were using. “Homegrown” was the largest single vendor category, and 27 percent of both the Global 2000 and the total group of respondents were satisfied with their homegrown management systems. Large vendor products tended to score lower – between 17 and 25 percent overall satisfaction – with a few specialty vendors ranking 50 percent and one ranking 70 percent satisfaction.


SIEM Evaluation Criteria

Ability to Execute

• Product/service evaluates the vendor’s ability and track record to provide product functions in areas such as log management, compliance reporting, security event management, and deployment simplicity.

• Overall viability includes an assessment of the organization’s financial health, the financial and practical success of the overall company, and the likelihood that the business unit will continue to invest in the SIEM technology segment.

• Sales execution/pricing evaluates the technology provider’s success in the SIEM market and its capabilities in presales activities. This includes SIEM revenue and installed base size, growth rates for SIEM revenue and the installed base, presales support, and the overall effectiveness of the sales channel. The level of interest from Gartner clients is also considered.

• Market responsiveness and track record evaluates the match of the SIEM offering to the functional requirements stated by buyers at acquisition time, and the vendor’s track record in delivering new functions when they are needed by the market. Also considered is how the vendor differentiates its offerings from those of its major competitors.

• Marketing execution evaluates the SIEM marketing message against our understanding of customer needs, and also evaluates any variations by industry vertical or geographic segment.

• Customer experience is an evaluation of product function or service within production environments. The evaluation includes ease of deployment, operation, administration, stability, scalability and vendor support capabilities. This criterion is assessed by conducting qualitative interviews of vendor-provided reference customers, in combination with feedback from Gartner clients that are using or have completed competitive evaluations of the SIEM offering.

• Operations is an evaluation of the organization’s service, support and sales capabilities, and includes an evaluation of these capabilities across multiple geographies.

Completeness of Vision

Market understanding evaluates the ability of the technology provider to understand buyer needs and to translate those needs into products and services. SIEM vendors that show the highest degree of market understanding are adapting to customer requirements in areas such as log management, simplified implementation and support, and compliance reporting, while also meeting SEM requirements.

Marketing strategy evaluates the vendor’s ability to effectively communicate the value and competitive differentiation of its SIEM offering.


Sales strategy evaluates the vendor’s use of direct and indirect sales, marketing, service, and communications affiliates to extend the scope and depth of market reach.

Offering (product) strategy is the vendor’s approach to product development and delivery, emphasizing functionality and feature sets as they map to current requirements for SIM and SEM. Development plans for the next 12 to 18 months are also evaluated.

Because the SIEM market is mature, there is little differentiation between most vendors in areas such as support for common network devices, security devices, operating systems, consolidated administration capabilities and asset classification functions. In this evaluation, we neutralized the relative ratings of vendors with capabilities in these areas, but there is a severe “vision” penalty for the few vendors that continue to have shortcomings in this area. This change has the effect of improving the visibility of relative differences in other functional areas.

In this year’s SIEM vendor evaluation, we have placed greater weight on capabilities that aid in targeted attack detection:

We evaluate data access monitoring capabilities, which are composed of file integrity monitoring (native capability and integration with third-party products), data loss prevention (DLP) integration, and database activity monitoring (direct monitoring of database logs and integration with database activity monitoring [DAM] products).

We evaluate user activity monitoring capabilities, which include monitoring of administrative policy changes and integration with IAM technologies for automated import of access policy for use in monitoring.

Our evaluation of application layer monitoring capabilities includes integration with third-party applications (e.g., ERP financial and HR applications, and industry vertical applications) for the purpose of user activity and transaction monitoring at that layer; the external event source integration interface that is used to define the log format of an organization’s in-house developed applications; and the ability to derive application context from external sources.

We evaluate vendor capabilities and plans for profiling and anomaly detection to complement existing rule-based correlation.

Many SIEM vendors are now positioning the technology as a platform. There is a focus on expansion of function – security configuration assessment, vulnerability assessment, file integrity monitoring, DAM and intrusion prevention systems (IPS). This year, we have included an evaluation of SIEM vendor platform capabilities in our overall assessment of Completeness of Vision. Despite the vendor focus on expansion of capability, we continue to heavily weight deployment simplicity. Users still value this attribute over breadth of coverage beyond the core use cases. There is a danger of SIEM products (which are already complex) becoming too complex as vendors extend capabilities. Vendors that are able to provide deployment simplicity as they add function will ultimately be the most successful in the market.

Vertical industry strategy evaluates vendor strategies to support SIEM requirements that are specific to industry verticals.

Innovation evaluates the vendor’s development and delivery of SIEM technology that is differentiated from the competition in a way that uniquely meets critical customer requirements. Product capabilities and customer use in areas such as application layer monitoring, fraud detection and identity-oriented monitoring are evaluated, in addition to other capabilities that are product-specific and are needed and deployed by customers.

This year, we have made changes in the way we evaluate innovation. There is a stronger weighting of capabilities that are needed for security monitoring and targeted attack discovery – real-time event management, user activity monitoring, data access monitoring, application activity monitoring, and capabilities and plans for profiling and anomaly detection.

Geographic strategy. Although the SIEM market is currently centered in North America, there is growing demand for SIEM technology in Europe and Asia/Pacific, driven by a combination of compliance and threat management requirements. As a consequence, our overall evaluation of vendors in this Magic Quadrant now includes an evaluation of vendor sales and support strategies for these geographies.


Leaders

The SIEM Leaders quadrant is composed of vendors that provide products that are a good functional match to general market requirements, have been the most successful in building an installed base and revenue stream within the SIEM market, and have a relatively high viability rating (due to SIEM revenue, or SIEM revenue in combination with revenue from other sources). In addition to providing technology that is a good match to current customer requirements, Leaders also show evidence of superior vision and execution for anticipated requirements. Leaders typically have relatively high market share and/or strong revenue growth, and have demonstrated positive customer feedback for effective SIEM capabilities and related service and support.

Challengers

The Challengers quadrant is composed of vendors that have a large revenue stream (typically because the vendor has multiple product and/or service lines), at least a modest-size SIEM customer base, and products that meet a subset of the general market requirements. Many of the larger vendors in the Challengers quadrant position their SIEM solutions as an extension of related security and operations technologies.

Companies in this quadrant typically have strong execution capabilities, as evidenced by financial resources, a significant sales and brand presence garnered from the company as a whole, or other factors. However, Challengers have not demonstrated as rich a capability or track record for their SIEM technologies as vendors in the Leaders quadrant have.

Visionaries

The Visionaries quadrant is composed of vendors that provide products that are a good functional match to general SIEM market requirements but have a lower Ability to Execute rating than the Leaders. This lower rating is typically due to a smaller presence in the SIEM market than the Leaders, as measured by installed base, revenue size or growth, or by smaller overall company size or general viability.

Niche Players

The Niche Players quadrant is composed primarily of smaller vendors that are regional in focus, or that provide SIEM technology that is a good match to a specific SIEM use case or a subset of SIEM market requirements. Niche Players focus on a particular segment of the client base or a more limited product set. Their ability to outperform or innovate may be affected by this narrow focus. Vendors in this quadrant may have a small or declining installed base, or be limited, according to Gartner’s criteria, by a number of factors. These factors may include limited investments or capabilities, a geographically limited footprint, or other inhibitors to providing a broader set of capabilities to enterprises now and during the 12-month planning horizon. Inclusion in this quadrant does not reflect negatively on the vendor’s value in the more narrowly focused service spectrum.


What Are the Pain Points with Log Analysis?

In this year’s survey, we asked people to rate aspects of the log lifecycle as critical, important

and least important. Slightly over 50 percent of respondents rated collecting logs as their most

critical issue, followed by searching log data and reporting on log data. Rated least important were sharing log data and maintaining chain of custody.

Page 36: Siem & log management

36 | P a g e

The overall story from all of the responses is that getting useful data is too difficult but that it

would be valuable. In a “free-form” question at the end of the survey, people were asked for

additional comments. Alerting, querying, reporting, correlating, and analyzing were the most

common topics. A few respondents commented that “the right people aren’t looking at the

data,” that “nobody is looking at the data,” or that “people aren’t making the necessary effort to

understand the log data.” One respondent commented on the difficulty of finding time in his

day to implement a product that had been purchased. Another comment was simply, “start.”

Both of these comments point to a shortage of manpower. Lack of budget also was mentioned

a number of times.

Collecting Logs

Just slightly over half (51 percent) of survey respondents ranked collecting logs as their most

critical challenge in the log management lifecycle. This is actually an optimistic sign because it

suggests that people are at least trying to collect and store their logs so they can get value from

their log data. According to this year’s survey, 80 percent are collecting, and 67 percent are

archiving logs. In the 2005 survey, only two percent of companies surveyed stored their logs

longer than one year. In this year’s survey, 35 percent of companies stored their logs for a year

or longer.

But as the survey suggests, there are still a number of issues related to the collection of log data

holding organizations back from realizing the benefits of log analysis. The sheer amount of

information needing to be sorted, organized and searched is overwhelming, and tools aren’t

simplifying this process well enough. In the first place, there’s the problem of how to collect the

data from the various clients and programs into the log management system. While this is being

done in a number of different ways, the two primary options are to have the log-generating device


send data to a log server or have the log server periodically pick the data up. Some systems are

doing this well, and logging is straightforward, while other applications present log data in ways

that can’t be collected and stored easily, and still other applications don’t even have logs.
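A push setup can be as small as one forwarding line in the sender's syslog configuration. As a minimal sketch, assuming an rsyslog client and a placeholder collector name:

# Forward everything to the central collector over UDP (single "@");
# "@@" selects TCP on rsyslog. The hostname is a placeholder.
echo '*.* @logserver.example.com:514' >> /etc/rsyslog.d/90-forward.conf
service rsyslog restart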

MetaSploit Pro Gets SIEM, Cloud Integration

Rapid7's new MetaSploit Pro release, 4.0 (Jul 26, 2011), automates more workflow tasks.

A new version of the commercial MetaSploit penetration-testing product arrived today that integrates the tool with SIEM systems, offers cloud-based penetration testing and more automation, and works more tightly with vulnerability assessment and management tools.

Rapid7's MetaSploit Pro 4.0 is another step toward the goal of making penetration testing more

user-friendly and integrated with other security tools and processes: Part of that strategy is

automating many of the workflow operations the pen-testing tool provides so organizations can

execute more widespread and frequent tests. The new version of the platform also now can be

integrated with vulnerability assessment and Web application scanning tools.

HD Moore, chief security officer at Rapid7 and chief architect of the MetaSploit platform, says

there was major demand for MetaSploit Pro to provide more automation. "And what surprised


us was such a huge demand for SCADA exploits," he says. The new version boasts nine new

SCADA exploits, according to Moore.

Version 4 also supports pen-testing from both public cloud infrastructures and private clouds.

MetaSploit can be run from Amazon's EC2 service as an Amazon Machine Image. "It lets you run

a large-scale arbitrary phishing campaign that's not from your own [IP]," Moore says. "That

makes phishing campaigns look more realistic, [for example]."

Moore says MetaSploit Pro 4.0 also provides direct product-to-product integration with Rapid7's

NeXpose vulnerability management product.

Other new features include automated verification of vulnerabilities and reporting, support for

VMware vSphere, automated cracking of encrypted passwords offline, and the ability to pull

pen-test reports from MetaSploit Pro in an XML format.

Syslog servers

Syslog servers are the de facto standard for sending data to a log server. In this year’s survey, 75

percent of respondents who had log servers indicated that they are running syslog servers. This

is up slightly from 72 percent in 2007. The standard syslog protocol is not perfect but it is widely

used. However, some applications and operating systems still fail to support syslog servers.

Windows file servers are one example of a server that does not support syslog. Windows event

logs can be picked up by a log server using the Windows Management Instrumentation

Command-line (wmic).

One of the issues with syslog is that it uses UDP for communication. Since UDP is a

connectionless protocol, it is relatively simple to spoof syslog messages. Another issue is that by

default, encrypting syslog traffic is not supported. One possible alternative is syslog-ng, an upgrade to standard syslog that addresses these issues; some sites will be able to make that option work for them. RFC 3164 defines the syslog standard. Syslog-ng is more complicated to set up than the older syslog protocol, and support from hardware and software manufacturers is not as widespread.
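How little it takes to forge a message becomes obvious once you try. The following is a sketch only, assuming a bash shell built with /dev/udp support and a placeholder collector name; it writes a single fake RFC 3164-style message straight into a UDP collector, with no handshake and no authentication:

# <13> encodes facility "user", severity "notice"; nothing stops an
# attacker from naming any host or process in the payload
echo "<13>$(date '+%b %d %H:%M:%S') dbserver sshd[999]: forged entry" > /dev/udp/logserver.example.com/514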

SNMP traps

SNMP traps are also used to send data to log servers. An SNMP trap is a message about an

event that a switch, router or other device would send out. Many syslog servers will also accept

SNMP traps. SNMP traps share syslog's weaknesses: messages can be spoofed and are unencrypted. RFC 1157 and RFC 1215 define the SNMP standard. Newer versions of the SNMP protocol make some effort to resolve these issues, but at the expense of simplicity and wide support. Some log servers use FTP or file shares to retrieve log data. There are also agents that can run on a Windows server to forward logs to a log server. Some of these agents are proprietary to the log server, and some are standards-based and send the event log data to a


syslog server. One example of a standards-based logging agent that runs on a Windows server is the publicly available Snare agent.

Command-line search through Event Logs

Here is a simple command-line that will retrieve the NT Event log information from a Windows

2003 server:

wmic /node:192.168.1.195 /user:administrator /password:password ntevent where (message like "%logon%")

In this example, the word logon is the search term. It might be interesting to search for "fail" or "cmd," or perhaps a username that should not be logging on. Searching through the event logs with a command-line utility can be more powerful than the graphical Event Viewer that is included with Windows.

CAUTION: This can run for a long time before it returns results (30 minutes or longer, depending on the size of the logs). The command can also impact network traffic and CPU utilization on the requesting computer. Test it in a lab environment before attempting it on production equipment.
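One way to shorten the run is to let the remote host filter events before sending anything back. A sketch, reusing the placeholder host and credentials above; LogFile and EventType are properties of the Win32_NTLogEvent class behind wmic's ntevent alias, and EventType 5 denotes an audit failure:

wmic /node:192.168.1.195 /user:administrator /password:password ntevent where "LogFile='Security' and EventType=5" get TimeGenerated,SourceName,Message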

Analyzing Logs

After collecting the data, searching (44 percent) and reporting (42 percent) are the most problematic areas of log management, based on responses. Complaints about both were raised in an open-text comment option at the end of the survey, in which respondents said they would like to see improvements in alerting, querying, reporting and correlating to alleviate problems associated with analyzing their disparate sources of log data. This could be a lifecycle issue, with more mature groups having achieved log data collection to a satisfactory level (49 percent), and the rest still trying to collect log data in the first place.

If software logs can start saying things the same way, log analysis will become much more effective, which in turn should enhance security and system management. In last year's log management report, we noted that MITRE is standardizing the way log events are expressed through Common Event Expression (CEE), a standard log language for event interoperability.

The syntax, transport and taxonomy specifications are underway, as MITRE works with the Open Group's Distributed Audit Service (XDAS) to deliver its first taxonomy in the August-September 2008 timeframe.

While we wait for vendors to change the way their applications, operating systems and appliances express log events, the next best way to handle the data is through normalization in the log management system. Many log servers have filters and processors that normalize data from disparate systems so that events can be detected across routers, firewalls, servers and applications, each with its own way of reporting an event. They by no means catch all logs at this time, since proprietary applications will always be problematic. But they correlate enough logs from firewalls, IDSs, operating systems and other pervasive applications to give a good snapshot of events in their entirety.
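As a toy illustration of what such a normalization filter does, the two commands below map two very different source formats onto a single schema; the paths, log formats, and the event=/src= field names are assumptions made for the sketch:

# SSH authentication failures: take the address after the "from" keyword
awk '/Failed password/ { for (i = 1; i <= NF; i++) if ($i == "from") print "event=auth_failure src=" $(i+1) }' /var/log/auth.log

# Apache access log: a 401 status (field 9 in common log format) is the
# same kind of event arriving in a completely different shape
awk '$9 == 401 { print "event=auth_failure src=" $1 }' /var/log/apache2/access.log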

Sharing Log Data and Chain of Custody

Behind collecting, searching and reporting on log data, respondents also had issues with sharing

log data and maintaining chain of custody information on log data. Twenty-nine percent of

respondents stated that the whole lifecycle of log data was a critical problem, and 50 percent

said that it was an important problem. Looking at the Global 2000 companies, the number is a

little different. Among the Global 2000, 41 percent indicated that the whole lifecycle was a

critical problem and 45 percent indicated that it was an important problem. Sharing log data

and chain of custody issues will become more important as the log management industry

matures and IT personnel begin getting more of the data that they need.

In the 2008 survey, 28 percent of respondents indicated that they provide log data to other

departments or executives, down from 37 percent in 2007. According to survey feedback, the data that is shared goes mostly to security personnel and management.

Only five percent of respondents indicated user activity monitoring is of top importance. In one

case, the information was being used for billing – an interesting notion when you consider the IT

department is often seen as a cost center – and functions within IT like log management are

well-suited for supporting such applications. If IT can develop a business support mindset, it

may be possible to identify ways for log management to provide a financial return to the

company. One familiar example is in monitoring web traffic. If visitors to a web site can be

identified geographically, it may indicate an interest that could be targeted for development.

Application licensing is another profit-driven use of log data that was mentioned by

administrators.

Storage and Encryption

This year SANS posed a new question asking whether centralized log management should be able to store log data encrypted. The answer was a resounding yes: 51 percent in favor, 29 percent against, and the remainder undecided. This is in keeping with industry developments that make full-disk encryption quicker and easier with minimal performance impact, and with the ability of most commercial log management systems to store large amounts of data through a variety of means.
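Where a log platform cannot encrypt at rest by itself, rotated archives can still be protected with ordinary tools. A minimal sketch, with a placeholder key ID and paths:

# Encrypt a rotated archive to the archive role's public key,
# then securely remove the plaintext copy
gpg --encrypt --recipient logarchive@example.com /var/log/archive/messages-20120101.gz
shred -u /var/log/archive/messages-20120101.gz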


What is event log management software (ELMS)?

Event log management software (ELMS) is an application used to monitor change management and prepare for compliance audits at enterprises. ELMS is a key tool for IT administrators who must demonstrate to executives that an organization is prepared for a compliance audit. The Sarbanes-Oxley Act (SOX) and the Health Insurance Portability and Accountability Act (HIPAA) specifically require that public companies, or those that handle personal health information, monitor and retain audit trails.

In addition to basic features such as automated tracking of backups or of user account creation and deletion, new versions of event log management software offer real-time monitoring and notifications and complex reporting capabilities. IT administrators can be informed instantly of a potential security breach, such as a former employee trying to delete important data or recover deleted data. A critical feature of event log management software is nonrepudiation: compliance auditors require data generated directly from an application to prove that log records were not manipulated, removed or modified.
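Commercial products implement nonrepudiation internally, but the underlying idea can be illustrated with ordinary tools: fingerprint every archived log and keep the fingerprints where whoever can touch the logs cannot reach them. A minimal sketch with placeholder paths:

# Hash each rotated archive and store the digests on separate media;
# later verification is "sha256sum -c <digest file>"
sha256sum /var/log/archive/*.gz > /mnt/worm-media/digests-$(date +%Y%m%d).sha256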


Log Collection & Usage Basic Methodology


Data Collection and Flow


Log Management Key Features and Technology

The deployment of a log management architecture generally involves the following steps:

Step 1: Define the requirements and goals. Needs can be security log analysis, application problem analysis, or reporting for the purposes of regulatory compliance.

Step 2: Define the logging framework, the log types, and the specification of the systems where logs are generated.

Step 3: Determine what you are going to use log management for, according to your goals. Are you going to collect the logs? Maybe you need to analyze, or even report on and monitor, the logs on remote machines. If you plan on collecting log data, how long will it need to be archived? Will it be encrypted? Regulatory compliance may dictate such requirements.

Step 4: Decide what information and intelligence you plan to extract from your logs. End-user usage patterns, application problems and more can be derived.

Step 5: Evaluate technologies and vendor solutions to select the best fit for your needs. You may also choose to build a log management solution internally, leveraging open source components, and add a reporting and analysis layer later for intelligence.

Log Management Deployment Life Cycle

One view of assessing the maturity of an organization in terms of the deployment of log-management

tools might use successive categories such as:


Level 1: in the initial stages, organizations use different log analyzers to analyze the logs of the devices on the security perimeter. The aim is to identify patterns of attack on the perimeter infrastructure of the organization.

Level 2: with increased use of integrated computing, organizations mandate logs to identify access to and usage of confidential data within the security perimeter.

Level 3: at the next level of maturity, the log analyzer can track and monitor the performance and availability of systems at the enterprise level, especially of those information assets whose availability organizations regard as vital.

Level 4: organizations integrate the logs of various business applications into an enterprise log manager for a better value proposition.

Level 5: organizations merge physical-access monitoring and logical-access monitoring into a single view.

Determining Organizational Requirements – Basic Methodology

1. The organization must inspect the regulations with which it must comply

2. The organization must define the scope of the systems and devices to be watched, logged and correlated

3. Define the log retention time for each system

4. Determine the events per second (EPS) per system at normal and peak times

5. Define storage size requirements and weigh current resources against possible solutions and costs

6. Research the right solution and integration partner against the available budget and log volumes

7. Compare the current budget with the required budget

8. Prepare the management budget-update request presentation

If the budget is not approved, fall back to the affordable solution

List/Mapping of Log Generating Devices

Which systems create log files?

1. Every operating system creates log files

2. The most common systems are Windows or Linux/UNIX based

3. Most "devices" with custom operating systems are Linux/UNIX based and therefore support at least syslog

4. This means most logs will be in syslog or Windows Event Log format


5. Web servers such as Apache and IIS have their own formats (IIS, W3C, NCSA, Apache)

What kind of information is in log files by default?

1. Most systems log only basic information that allows only a basic understanding

of the event

2. In the case of a security incident that requires a forensic investigation, the information stored in log files under default settings usually would not be sufficient to support a valid investigation

3. Log files must contain origin and correlation-assisting information but not sensitive information. For example, log files must contain:

Username

IP

Exact Date and Time

Source/Target Hostnames

Application Name / User Agent

Application Full Executable Path

Screen/Document Referrer

Sender From Address/Account

Sender To Address/Account

Message/Mail Subject

Attachment/Upload filenames and MIME type

Log files must NOT contain:

Passwords

Credit Card Holder Information (Card Number, Expiration, PIN, CSC,

Address)

Information within user privacy borders, which is anything besides metadata (document data, email content, attached/uploaded file contents, etc.)

Which information can be defined to appear in log files?

1. The basic information of a Linux system contains the events reported by each service, for example: auth, authpriv, daemon, cron, ftp, lpr, kern, mail, news, syslog, user, uucp, local0-local7.

Each event is assigned a severity (Emergency, Alert, Critical, Error, Warning, Notice, Info or Debug); see the logger sketch after this list

2. The basic information of a Windows system contains the events reported by

each Application, System Service and Security Authentication Mechanism


Event Type, Date, Time, Source, Category, EventID, Level, Task, Opcode,

and Keywords

3. Most systems allow most of the common fields/information to be

defined/added to a customized log

4. Some information you may require for a valid forensic investigation will not be

supported for logging by the operating system or application

5. In these cases, an agentless SIEM solution should connect to the machine and execute a local script to fetch or sniff the data and log it, while an agent-based SIEM's agent should connect to that application or data stream and fetch the data

6. In most of these cases manual intervention is required: the SIEM vendor creates a new or upgraded connector for the targeted system, or the SIEM integrator develops a workaround
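As a quick illustration of the facility/severity pairing described in item 1 above (a sketch; the message text is arbitrary):

# Emit a test event with an explicit facility and severity
logger -p authpriv.warning "test: privilege change observed on $(hostname)"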

System Sizing

Why and How to Calculate Your Events per Second

Each time I ask a CISO or a technology expert about their number of events per second (EPS), I receive the same answers: "No idea," "A lot of events," or "EPS, what's that?" Most stakeholders have not been sensitized to, and often do not understand, EPS metrics as a key design factor during the enumeration and scope-design phases of a log or event management (SIEM) project. Why are EPS metrics so important?

EPS metrics usage

These EPS metrics will help you to:

Acquire an appropriate Log or Event Management solution

Most log and event management vendors argue that their products support thousands of events per second, and their products may well be designed for that rate; the vendor, in turn, will certainly ask questions about your EPS metrics. Most of the time, unless it is an appliance, a log and event management solution runs on third-party hardware and software: a dedicated server (with a limited amount of CPU, RAM and NICs), a SAN storage connection (with limits on size, I/O, speed, etc.), an attached external database (with its own critical metrics), a backup solution, network bandwidth, and so on. EPS metrics will help you design part of your architecture and determine part of your costs (CAPEX/OPEX). The more EPS you have, the more scalable and available your architecture will need to be. If you acquire a log or event management appliance, you are limited de facto by the vendor's solution.


Failing to determine EPS metrics during the acquisition process will almost certainly leave you with a solution that is oversized or undersized relative to your real initial scope. But never forget that the EPS rate is only one factor in the final selection of a log or event management solution.

Respond appropriately to compliance and regulatory requirements

If you are subject to compliance or regulatory requirements that mandate log and event retention policies, EPS metrics will help determine your online and offline storage requirements. The retention period is dictated by the regulation; the storage requirement is not. How many gigabytes or terabytes will you need to cover the retention period?

Improve your Capacity Management

During day-to-day operation of your log and event management solution, storage has to be monitored to ensure that capacity meets current and future business requirements in a cost-effective manner. EPS metrics, measured against a baseline, will help you improve application sizing and performance management and build a capacity plan. Depending on your EPS metrics, you may have to redesign your technical infrastructure, for example by clustering your SIEM solution or by creating an out-of-band network to deal with bandwidth limitations.

Improve your Incident Management

Once you have an EPS baseline per device and per infrastructure, an abnormal variation in your event rate may indicate that an unauthorized change has been made, that a device is misconfigured, or that you are under attack.

Improve your Service Level Management

As an MSSP (managed security service provider), if you establish an EPS baseline with your customer during scope definition, it will be easier to include EPS guarantees and limitations in the SLA. EPS metrics can be written into an SLA just like network bandwidth, using concepts such as "burstable EPS," "peak EPS" and "EPS, 95th percentile."

Provide some useful KPIs

Once you have an EPS baseline, you will be able to gather some interesting KPIs, for example total audited events over a period of time, EPS versus correlated events, and so on.

EPS metrics definitions and methodology

The best definitions of EPS metrics I have read are in the SANS whitepaper "Benchmarking Security Information Event Management (SIEM)," published in February 2009. Below is a recap of the metric definitions and the methodology for creating your EPS baseline.


There are two EPS metrics:

Normal events per second (NE): the number of events per second produced during normal usage by a device, or by your whole log or event management scope.

Peak events per second (PE): the number of events per second produced at peak times by a device, or by your whole log or event management scope. Peaks come from abnormal activity on devices that creates temporary EPS spikes, for example denial of service, port scanning, or mass SQL injection attempts. The PE metric is the more important of the two and should be used to determine your real EPS requirements.

Depending on your activities and your SIEM infrastructure, you will track these metrics for both functions: NE and PE for log management, and NE and PE for event management. A log management solution has its own EPS limits, which are not the same as an event management solution's limits. If your future infrastructure places a log management solution in front of the event management solution, you can filter out unnecessary events before they reach the event management layer. I strongly recommend splitting the two activities across dedicated solutions.

Also, to obtain meaningful EPS metrics, we recommend analyzing a period of 90 days of logs. The analyzed logs should represent all of your normal and peak activity; if you analyze only a short period, your EPS metrics will not reflect reality.

Methodology:

Define your scope!

To define your initial scope, ask yourself simple questions: which compliance or regulatory requirements must fall within the log management scope? Which initial "use cases," or policies, will you monitor through the event management solution? Scope definition could fill a dedicated article of its own, so it is not covered further here.

Scope device inventory

Identify and inventory all devices that should be integrated into your log or event management scope. The scope definition will yield a certain number of required devices, some of which run the same technology (for example, four Check Point firewalls or two Apache web servers). These identical devices do not have the same roles and activity, so they will typically show different EPS metrics.

Identify log locations and required events

For each device, identify the log locations and the log retention period, and identify within those logs the events required to support the "use cases" or policies being monitored. For log management, log everything. For event management, if a log management solution sits in front of the event management solution, you only need certain log patterns; identify those patterns and extract them into dedicated log files. Event management is not about logging everything: do not treat your SIEM as a long-term storage solution; long-term storage is the role of log management.

You will then typically have one original log file for the log management scope and one derived log file for the event management scope.

Identify NE and PE metrics for devices and get the PE grand total

Here come the log-fu and the mathematics. You will need some shell skills to extract the necessary information, and a simple spreadsheet to analyze it; a sketch of the extraction follows below.

Identify the PE rates of all your devices and sum them to come up with a grand total for your environment. It is unlikely that all devices in your scope will ever simultaneously produce events at their maximum rate.
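The extraction can start as a simple timestamp count. A rough sketch, assuming RFC 3164-style timestamps ("Aug 25 04:23:47") and a placeholder file name:

# Count events per one-second timestamp and list the busiest seconds
# (candidate PE values)
awk '{ print $1, $2, $3 }' /var/log/messages | sort | uniq -c | sort -rn | head

# Average EPS across the file, assuming it covers one full day (86,400 s)
echo "scale=2; $(wc -l < /var/log/messages) / 86400" | bc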

Example of PE rate analysis

In this example, we have an IDS exposed to the Internet, and we will do some statistical analysis on one month of logs to determine the PE metric for this device. First, gather the number of events per day and calculate the average and median EPS per day (number of events per day / 86,400 seconds). In this example the average EPS rate is 0.03 and the median EPS rate is also 0.03. But 12 days have an average EPS rate above 0.03, and there is one average EPS peak rate of 0.08.

We then zoom in on 2011-04-10, the day with the 0.08 average peak, to determine the exact peak rate for that day, this time plotting all events by minute. The peak lies between 09:42 PM and 09:59 PM, and at a one-minute resolution the PE rate is now 6.27 (number of events per minute / 60), no longer 0.08!


Zooming into that time interval and plotting all events per second identifies the precise peak: the real PE rate is 12, not 6.27!


As this example shows, unless you analyze your logs precisely, you will not be able to determine your exact NE and PE rates. The PE grand total clearly does not represent a real peak rate, but it will keep you from acquiring a log or event management solution that is undersized in terms of EPS limits.

Why and How to Calculate Your Event Log Size

If you are planning to start a log or event management project, you will need to know your normal event log size. This value, combined with the normal events per second (NE) value and your storage retention policy, will help you estimate your storage requirements.
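For illustration only (both figures are assumptions, not measurements): at an average of 100 EPS and 200 bytes per event, the raw volume works out as follows:

# Raw bytes per day = EPS x bytes/event x 86,400 seconds
echo "scale=1; 100 * 200 * 86400 / 1024^3" | bc        # ~1.6 GB per day
echo "scale=0; 100 * 200 * 86400 * 365 / 1024^3" | bc  # ~587 GB per year, before indexing overhead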

Never forget that log management storage requirements are not the same as event management storage requirements; most of the time the log management requirements are higher. For log management, for example, PCI DSS v2.0 Requirement 10.7 mandates one year of retention:

10.7 Retain audit trail history for at least one year, with a minimum of three months immediately available for analysis (for example, online, archived, or restorable from back-up).

To satisfy PCI DSS v2.0 Requirement 10.6, on the other hand, you may do event management with a SIEM (such as ArcSight ESM, RSA enVision, or QRadar SIEM).


10.6 Review logs for all system components at least daily. Log reviews must include those

servers that perform security functions like intrusion-detection system (IDS) and authentication,

authorization, and accounting protocol (AAA) servers (for example, RADIUS). Note: Log

harvesting, parsing, and alerting tools may be used to meet compliance with Requirement 10.6

You don't need a SIEM to do log management, but you also don't need to store a year of logs on your SIEM. Long-term retention, long-term reporting and "raw" event forensics are mostly done on a log management infrastructure (such as ArcSight Logger, QRadar Log Manager, or Novell Sentinel Log Manager). Storage retention for your event management infrastructure will depend mostly on your correlation rules, your acknowledgment time for a correlated event, the number of security analysts in your SOC, and so on.

Don't imagine that a magic formula exists to define your event log size. Some tools can help, but you need to analyze your own logs to establish your normal event log size. First, define your log and/or event management scope; this scope may initially be driven by regulations or compliance, but don't forget that compliance is not security. Each technology also produces different log sizes: an Apache HTTPD entry will not be the same size as an SSHD entry, and an Apache HTTPD log from server A will rarely match the entry size of one from server B.

xxx.xxx.xxx.xxx - - [25/Aug/2011:04:23:47 +0200] "GET /feed/ HTTP/1.1" 304 - "-" "Apple-PubSub/65.28"

This log from Apache HTTPD server A has a size of 102 bytes.

xxx.xxx.xxx.xxx - - [25/Aug/2011:04:15:08 +0200] "GET /wp-content/themes/mystique/css/style-green.css?ver=3.0.7 HTTP/1.1" 200 1326 "http://eromang.zataz.com/" "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.20) Gecko/20110803 Firefox/3.6.20 ( .NET CLR 3.5.30729)"

This log from Apache HTTPD server B has a size of 274 bytes.
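To get the same per-event figure across a whole file rather than from a hand-picked line, a one-liner is enough. A sketch, with access.log as a placeholder name:

# Average uncompressed event size: total bytes (newlines included)
# divided by the number of events
awk '{ bytes += length($0) + 1 } END { printf "%d events, %.0f bytes/event\n", NR, bytes / NR }' access.log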

Also, depending on the log or event management product, you need to account for data generated by the product's own internal mechanisms. For example, to make events searchable most products build indexes, and these indexes average roughly twice the size of the events themselves. Products also monitor themselves, regularly execute tasks, and compute statistics for dashboards and reports.

A bash script was developed that analyzes all your archived logs and gathers the following information:

For each archived file: the total number of events, the total uncompressed size of the events, and the normal event log size.

The total events across all archived files.

The total uncompressed size of all events in all archived files.

The grand total normal event log size.

The average number of events per archived file.

The average bytes per archived file.

A script to calculate the event log size can be found at:

http://eromang.zataz.com/uploads/event_log_size_calculator.sh

Key Differentiators between SIEM Products

Shortly, we will be looking at specific vendor solutions. How do we narrow the field and begin choosing our solution? In any evaluation there are certain key factors that separate one solution from another. These naturally differ from organization to organization; however, a few key issues will help most evaluations. As you go through the process, make your own list of key differentiators. This information will be very helpful during the evaluation process and will ensure you get the most out of your dealings with vendor sales and technical staff.

FIREWALL AND INTRUSION PREVENTION/DETECTION SUPPORT

Any solution that does not support your organization's firewalls and intrusion detection/prevention systems should not be considered. These devices are the basis of an organization's network security, and support for them is simply a must.

SERVER OS SUPPORT

Does a solution support the servers deployed in your organization? This is especially important for shops running Novell, as support appears to be limited. Linux/UNIX support is typically available, but there may be caveats about support beyond login/logoff activity. As expected, support for Microsoft Windows servers is built into nearly every system; however, how this support is implemented is key to the selection process. Windows does not support syslog natively; therefore most systems require one of the following three options: the Snare client, a proprietary client, or agentless log pulls. Let's take a closer look at each of these options.

The Snare client is used by many SIM vendors to collect Windows server logs. Snare is an open source product that converts Windows event log data into syslog output. Snare has the advantage of being widely used and, being open source, is free. Unfortunately, some may be leery about relying on an open source product for enterprise log collection: when issues arise, who do you go to for support, and will the SIM vendor fully support Snare?

A proprietary client is another option for Windows log collection. Typically a vendor decides to write its own agent to avoid the concerns and issues involved with Snare. Often these clients continue the same process of converting Windows event logs into syslog output. Proprietary clients have the advantage of providing a single source for support issues. However, they still require installing additional software into an organization's server infrastructure.

Clientless log pulls are the third option in this discussion. At first glance this option seems to be the clear winner: who wouldn't want agentless log pulls, with no software to install and shorter implementation times? Of course, clientless log pulls have their disadvantages as well. They are usually only available on SIM solutions that themselves run Windows, which can severely narrow the options. Scalability can also become a concern, because logs are pulled using NetBIOS; this can be very system-intensive for large log volumes and slow across WAN connections.

There is no definitive answer to this Windows logging dilemma. However, with careful thought, testing and discussion, your organization can make the correct decision regarding Windows logging.

LOG COLLECTION AND MANAGEMENT

At its base, SIM starts with simply collecting logs from a variety of sources, and yet there can be very different options even in this area. One key issue for many organizations, especially those who expect the tool to help with forensics, is the availability of logs in their raw format. Raw format refers to the log file in its original state; most SIM solutions normalize logs from their original state for correlation and storage purposes. A related issue to consider is what options exist for exporting logs and, if available, what format the logs are exported to. Think about situations where you may need to provide log data to another system, department or group. Will the SIM meet these needs?

Backup and restore is another log management issue to address during your evaluation. Needless to say, your SIM solution will accumulate large amounts of data. What are your requirements for backing up and restoring this data? Many SIM systems run on very proprietary backend databases. Is there an efficient method for performing backups and restores? How long do you want to keep a copy of your logs, and in what format?

When doing system evaluations, take a close look at what options a user has for searching through events. Some vendors limit your ability to customize searches in order to gain efficiency and performance. Make sure that your search needs will be met.

RULES

Rules are the basis for how logs are correlated into an incident or offense. When evaluating SIM

solutions you should consider three main issues relating to rules: the number of vendor-provided rules, the quality of those rules, and the capabilities for creating custom rules. Every vendor provides a certain set of predefined rules. The number of rules and their capabilities vary greatly, so take a good look when comparing solutions. For example, it is


pretty easy for a vendor to write a rule looking for failed authentication attempts. However, a

much more involved and useful rule might look for failed login attempts spread out over a long

period of time, across several systems, from a similar source address.
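The idea behind such a rule can be prototyped outside any SIM to clarify what it must express. A rough sketch against a Linux auth log (the path and message format are assumptions, and a real SIM rule would add the time window and cross-system correlation):

# Count failed SSH logins per source address across the whole file
grep 'Failed password' /var/log/auth.log | awk '{ for (i = 1; i <= NF; i++) if ($i == "from") print $(i+1) }' | sort | uniq -c | sort -rn | head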

Pre-defined rules are important, but I believe the ability to create custom rules is even more

important. No two organizations have the same assets or security needs; therefore the ability to create rules specific to your organization is essential. Try creating a custom rule and

see how easy the process is and if the interface limits your choices. All vendors will say you have

the ability to create custom rules, but what does that really mean? Can you completely define

the parameters of your rule or are you limited by their predefined templates? Do you need to

be a programmer to write effective rules, or is there an easy-to-learn system?

INCIDENT MANAGEMENT

Once an alert has been generated for a particular issue, the responding individual will begin the

organization’s incident response process. SIM systems often include some form of incident or

ticket management system to aid in this process. Typically these systems are much more limited

in scope than full blown Help Desk software; however they can still be very useful in organizing

and prioritizing alerts and documenting your response. Larger organizations may also wish to

look at options for integrating alerts from the SIM into their existing Incident Management

System. During your evaluation, think about the process you will follow to take an alert

through the incident response process and how your SIM can assist in this process.

REPORTING

Reporting is a key differentiator for many SIM decisions, especially with compliance often being a key reason for pursuing SIM. "Compliance has become the principal driver in the deployment of SIM technologies," says Scott Crawford, a senior analyst in the Enterprise Management Associates security and risk management group. Whether it's to meet regulatory standards or merely to satisfy internal policies, enterprises have expanded their use of SIM tools to embrace what is increasingly called governance/risk/compliance (GRC) management, he explains. (Carr, 2007) All vendors will claim their solution can solve all your compliance needs, but treat this like any other vendor claim. SOX, HIPAA and GLB compliance reporting get all the hype when it comes to SIM, but consider how you could use the SIM for your operational reporting requirements as well. For instance, we are using our SIM to develop a web-based dashboard to track usage of various IT systems across our organization.

Evaluating a reporting solution is very similar to evaluating rules. All systems come with certain predefined reports and, typically, a method for creating custom reports. Make sure to test custom report creation during your evaluation, and think about the presentation of these reports in both paper and electronic formats. Imagine having all this data in one location and not being able to report appropriately back to senior management. Also, SIM solutions can cost significant dollars, and upper management may be skeptical of the purchase, especially in smaller organizations. Proper follow-up via reporting will help justify the purchase and make future budget proposals that much easier.

COMPANY STABILITY

As mentioned above, the SIM market is a very crowded place that most agree is ripe for continued consolidation. Larger security companies without a SIM solution, or looking to improve an existing product, may decide that purchasing one of the smaller vendors is a better

idea than developing their own solution. “But that was before big vendors began buying their

way in the market, snapping up smaller SIM tools vendors left and right. Major SIM deals have

included EMC’s September acquisition of Network Intelligence, IBM’s purchase of Consul and

Micromuse, and Novell’s buyout of eSecurity.” (McLaughlin, 2007)

The risks of purchasing a product whose vendor is later bought by a larger company are fairly straightforward: worries about continued support, migration paths, increased maintenance costs, and degraded customer service. One of the more complicated and potentially dangerous issues is your deployed ruleset.

As stated by Mark Bruck, president of BAI Security, "You can invest a lot of resources and time into tweaking the systems and developing rules around correlating events and triggers for

specific types of events, but after an acquisition, all this work can go down the drain because

there aren’t always clear migration paths from one vendor to another, and your system may not

be as functional”. (McLaughlin, 2007)

The point to this section is not to scare you away from smaller company offerings, but rather to

make sure that you understand the risks involved in the marketplace. Smaller companies can offer several benefits as well, such as more personalized service, increased flexibility and

potentially less costly solutions. The key is to understand the risks in the marketplace, your

organization’s tolerance for such risks and make the correct decision.


Complying with Laws and Regulations

SOX, HIPAA, PCI DSS, ISO 27001/2

PCI DSS Compliance - Periodic Operational Task Summary

The following chapter contains a summary of operational tasks related to logging and log review. Some of the tasks are described in detail in the document above; others are auxiliary tasks needed for successful implementation of a PCI DSS log review program.

Daily Tasks

The table below lists the daily tasks, the responsible role that performs them, and the record or evidence created of their execution:

Task: Review all the types of logs produced over the last day, as described in the daily log review procedures
Responsible role: Security administrator, security analyst, (if authorized) application administrator
Evidence: Record of reports being run on a log management tool

Task: (As needed) Investigate anomalous log entries, as described in the investigative procedures
Responsible role: Security administrator, security analyst, (if authorized) application administrator
Evidence: Recorded logbook entries for investigated events

Task: (As needed) Take actions to mitigate, remediate or reconcile the results of the investigations
Responsible role: Security administrator, security analyst, (if authorized) application administrator, other parties
Evidence: Recorded logbook entries for investigated events and actions taken

Task: Verify that logging is taking place across all in-scope applications
Responsible role: Application administrator
Evidence: A spreadsheet recording such activities for future assessment

Task: (As needed) Re-enable logging if it has been disabled or stopped
Responsible role: Application administrator
Evidence: A spreadsheet recording such activities for future assessment

Weekly Tasks

The table below lists the weekly tasks, the responsible role that performs them, and the record or evidence created of their execution:

Task: (If approved by a QSA) Review all the types of logs produced on less critical applications over the last week, as described in the daily log review procedures
Responsible role: Security administrator, security analyst, (if authorized) application administrator
Evidence: Record of reports being run on a log management tool; record of QSA approval for less frequent log reviews and the reasons for such approval

Task: (As needed) Investigate anomalous log entries, as described in the investigative procedures
Responsible role: Security administrator, security analyst, (if authorized) application administrator
Evidence: Recorded logbook entries for investigated events

Task: (As needed) Take actions to mitigate, remediate or reconcile the results of the investigations
Responsible role: Security administrator, security analyst, (if authorized) application administrator, other parties
Evidence: Recorded logbook entries for investigated events and actions taken

Monthly Tasks

The table below lists the monthly tasks, the responsible role that performs them, and the record or evidence created of their execution:

Task: Prepare a report on investigated log entries
Responsible role: Security analyst, security manager
Evidence: Prepared report (to be filed)

Task: Report on observed log message types
Responsible role: Security analyst, security manager
Evidence: Prepared report (to be filed)

Task: Report on observed NEW log message types
Responsible role: Security analyst, security manager
Evidence: Prepared report (to be filed)

Task: (If approved by a QSA) Review all the types of logs produced on non-critical applications over the last month, as described in the daily log review procedures
Responsible role: Security administrator, security analyst, (if authorized) application administrator
Evidence: Record of reports being run on a log management tool; record of QSA approval for less frequent log reviews and the reasons for such approval

Task: (As needed) Investigate anomalous log entries, as described in the investigative procedures
Responsible role: Security administrator, security analyst, (if authorized) application administrator
Evidence: Recorded logbook entries for investigated events

Task: (As needed) Take actions to mitigate, remediate or reconcile the results of the investigations
Responsible role: Security administrator, security analyst, (if authorized) application administrator, other parties
Evidence: Recorded logbook entries for investigated events and actions taken

Quarterly Tasks

The table below lists the quarterly tasks, who performs them, and the record or evidence created of their execution:

Task: Verify that all the systems in scope for PCI are logging and that logs are being reviewed
Responsible role: Security analyst, security manager
Evidence: Recorded logbook entries for review and exception follow-up

Task: Review the daily log review procedures
Responsible role: Security analyst, security manager
Evidence: Updates to logging procedures; change log

Task: Review the log investigation procedures
Responsible role: Security analyst, security manager
Evidence: Updates to logging procedures; change log

Task: Review collected compliance evidence
Responsible role: Security analyst, security manager
Evidence: Compliance evidence; evidence review log

Task: Review compliance evidence collection procedures
Responsible role: Security analyst, security manager
Evidence: Updates to procedures; change log

Annual Tasks

The table below lists the annual tasks, who performs them, and the record or evidence created of their execution:

Task: Review the logging and log review policy
Responsible role: CSO
Evidence: Policy changes; change log; policy review meeting minutes

Task: Review compliance evidence before the QSA assessment
Responsible role: PCI DSS compliance project owner
Evidence: Meeting minutes or other record

Task: Live tests with anomalies
Responsible role: As needed
Evidence: Logs or other records of such tests

Implementing the 20 Critical Controls with Security Information and Event Management (SIEM) Systems

Abstract and Introduction

There is much debate over the reality of cyber war, cyber espionage and the advanced persistent threat (APT). The reality of the current situation is that there are those who intend to violate the confidentiality, integrity or availability of critical data sets. The threat is real, it is causing damage to systems and leakage of valuable data, and it is difficult to defend against.

These controls were updated in April 2011 to reflect changes in the threat landscape and to

introduce additional controls that have been found to be most effective at stopping today’s

threats. For example, left to operate in silos, security systems take more effort to maintain and

potentially leave the organization vulnerable to gaps in their correlation. So an important new

recommendation in the April update is the listing of security information and event

management (SIEM) systems as a necessary control for visibility and attribution. Because one of

the key goals of these 20 critical controls is automation, there must be a central "brain" that can synthesize raw security data feeds; this is the primary function of a SIEM.

As organizations have begun to implement the 20 controls, they are looking for practical

guidance on where to start and how to achieve best results in their implementations. This

involves prioritizing controls against systems already in place and ultimately implementing a

central hub for processing and correlating data sets from information security tools.

Goals and Philosophies of the Top 20 Critical Controls

Overall, the 20 critical controls embody four main philosophies for implementing security controls that will combat the most significant threats to information systems. The guidelines, posted at SANS.org, describe these philosophies as:

Defenses should focus on addressing the most common and damaging attack activities

occurring today and those anticipated in the near future.

Enterprise environments must ensure consistent controls across an enterprise to

effectively negate attacks.


Defenses should be automated where possible and periodically or continuously

measured using automated measurement techniques where feasible.

To address current attacks occurring on a frequent basis against numerous

organizations, a variety of specific technical activities should be undertaken to produce a

more consistent defense.

To truly understand the controls, IT managers must read them in light of these four philosophies. Two of the philosophies are of particular interest for this discussion:

• "Defenses should focus on addressing the most common and damaging attack activities occurring today, and those anticipated in the near future"

• "Defenses should be automated where possible, and periodically or continuously measured using automated measurement techniques where feasible"

Most importantly, these controls must be automated in order to reduce mistakes and

complexity. An organization cannot rely on people to perform these tasks manually, yet this is

precisely the situation organizations find themselves in today.

When individuals are left to collect and interpret data manually and respond to threats or

implement appropriate defensive controls, they often fail in their efforts, leaving systems

they’re supposed to be protecting vulnerable to attack. Common security principles teach us

that if there is a weakness (vulnerability) in a system, then that weakness should be repaired so

it no longer represents a risk. With manually implemented controls, the weaknesses are the

humans tasked with the responsibility of correlating, interpreting, and responding to threats. An

upgraded, automated system, implemented and managed correctly, can remove these human-based weaknesses and system control failures. This does not mean those machines will be

infallible. Over time humans will need to fine-tune systems to better interpret and respond to

the threats that are discovered. But this is the direction and philosophy that must be

entertained.

Evaluating the Relationship between the Controls

To better understand which approach to take to implement the controls, organizations should consider how the controls interact with each other. Such an undertaking helps formulate a strategy for implementation. As with planning any other project, begin with the project's end goals in mind before taking the first implementation steps.

A good place to start would be evaluating the overlaps between multiple projects, where

resources can be prioritized and reused and efficiencies can be discovered. For instance, imagine

an organization was planning on implementing a payroll system and a sales system. Most likely

each of those technical systems would require authentication and access control management

in order to ensure that only authorized individuals were able to access the information in each

system.


One approach would be to have each project team implement its own authentication database

(resulting in separate databases for the payroll system and the sales system). The second, more

efficient, option would be for the teams to utilize a common authentication database that could

be extended to future projects as well.

If an organization has a high-level view of all the controls, it will be able to identify

overlapping system requirements between each of the 20 controls. One way that application

developers often accomplish this task is by using a tool known as the Unified Modeling Language

(UML).

The Object Management Group (OMG) defines UML in this way: “The Unified Modeling

Language is a visual language for specifying, constructing, and documenting the artifacts of

systems. It is a general-purpose modeling language that can be used with all major object and

component methods, and that can be applied to all application domains (e.g., health, finance,

telecom, aerospace) and implementation platforms.”

There are 14 different UML diagram types that are defined by the OMG, all of which can be

divided into two categories of diagrams — structural diagrams and behavioral diagrams. Entity

relationship diagrams (ERDs) or class diagrams are commonly used to define classes of objects

and the attributes associated with each class.

For the sake of this discussion, this concept has been translated from the development world to

be used by security engineers to create a holistic view of the 20 critical controls. Diagramming

the controls using this methodology reveals the commonalities and overlaps between each of

the controls. This will not only help improve efficiencies and reuse, it will also help identify high-

priority controls, which are foundational or prerequisite controls used for multiple purposes.

For example, in satisfying Critical Control 1, “Inventory of Authorized and Unauthorized

Devices,” you might develop an entity relationship diagram (ERD) similar to the one shown in

Figure 1.


A tool such as Microsoft Visio gives organizations the ability to include all of the objects

necessary to implement this control. Then, arrows can be used to illustrate the interaction

between each of the objects necessary for implementing the control. A security architect could

then perform this process for each of the 20 critical controls, which would show all of the

objects necessary to successfully implement each control. Once an ERD had been created for

each of the 20 critical controls, the architect could lay each of the ERDs side-by-side, as in Figure

2, to look for commonalities.


When the systems required to achieve the goals of each of the 20 critical controls are combined

into one ERD, an organization can focus on those systems that are necessary to make the

entire program successful. The systems required for implementing several of the 20 critical

controls should be the priority, as they lay the foundation for the program as a whole; these

higher-priority controls should be implemented before the others.

Looking at the diagram above, organizations will discover that the following systems are

repeated throughout the 20 critical controls:

• Security information and event management (SIEM) systems

• Security content automation protocol (SCAP) scanners

• Network traffic capture and monitoring sensors

• Anti-malware whitelisting or file integrity assessment tools

When an organization makes a plan to implement the 20 critical controls as a whole, SIEM

should be one of the first controls implemented. Vulnerability scanners (based on SCAP),

whitelisting tools, and other specific controls are important, but no specific control is more


relied upon for effective control implementation than a SIEM. That’s because a SIEM can take

data from all these tools and more to help organizations understand their vulnerabilities, detect

and troubleshoot security incidents, and improve their security posture. The next section shows

how and where SIEM intersects with 15 of the top 20 controls that call for automation.

Critical Controls and SIEM

Specifically, if you look at the first 15 controls (the controls that are most easily automated) one

at a time, you will see that SIEM products can quickly be made to interact with many of them, as

illustrated in Table 1.


In short, a SIEM, when properly configured, has the capacity to become the central nervous

system of a network, collecting and processing the data feeds that are given to it, and therefore

meets many of the top 20 requirements for automation. This allows organizations to leverage an

intelligent, continuous monitoring system that can take responsibility for securing networks

from the physical layer to the application layer, 24/7/365.

Post-Implementation Value of SIEM in Light of the Controls

The business benefits of integrating SIEM with the 20 critical controls are significant. SIEM

implementations enable organizations to better manage risk, gather actionable security metrics,

and audit the effectiveness of their control implementations.

Risk Management

Regardless of the risk model an organization subscribes to, it will always encounter two

common elements: vulnerabilities and threats. SIEMs can show an organization where the

vulnerabilities exist and whether they are being actively exploited by internal or external

threats. The consolidation and correlation of alerts from widely used security tools (firewalls,

intrusion detection/prevention tools, data leakage protection tools, and so on) with event logs


from base infrastructure (for example, hosts, databases and applications) enables SIEM to

detect the sophisticated and complex threats facing modern enterprises.

In addition, SIEM—when integrated into vulnerability, whitelisting, endpoint and security

monitoring systems—gives a better picture of the organization’s risk posture. In this context, the

controls are like a mirror that reflects the organization’s information security maturity. By

supporting the trending of security information, SIEM gives visibility into the organization’s risk

posture for ongoing improvements.

Ongoing, Actionable Metrics

Built into version 2.3 and later of the 20 critical controls is a set of metrics meant to

help an organization implement the controls. The metrics are written in such a way as

to measure how effectively an organization is meeting the spirit of the controls.

For example, the goal of Control 1 is to allow only authorized devices to connect and utilize the

organization’s network. Therefore, the metric and evaluation are to add unapproved devices to

the network, see if they are able to access resources, and measure how long it takes before

these devices are isolated and are no longer able to access the network. If the SIEM is

configured to keep an inventory of all authorized devices and information regarding new devices

is reported to the SIEM, then the SIEM should be able to record when these devices were first

discovered and how long they were actually on the network. Thus, the SIEM is able to record

metrics on how well an organization is meeting the goals of the controls. Ultimately this is

meant to facilitate continuous process improvement and help business owners establish

thresholds for levels of acceptable risk in the organization.
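As a toy illustration of such a metric, here is a minimal Python sketch that computes time-to-isolation from hypothetical device-discovery records; the event fields and values are invented for the example, not a particular SIEM's export format:

from datetime import datetime

# Hypothetical records: when an unauthorized device was first seen
# on the network and when it was isolated from it.
events = [
    {"device": "aa:bb:cc:dd:ee:01",
     "first_seen": "2012-03-01 09:14:00", "isolated": "2012-03-01 09:52:00"},
    {"device": "aa:bb:cc:dd:ee:02",
     "first_seen": "2012-03-02 14:03:00", "isolated": "2012-03-02 16:41:00"},
]

FMT = "%Y-%m-%d %H:%M:%S"

def time_to_isolation_minutes(event):
    # Minutes between first discovery and loss of network access.
    first = datetime.strptime(event["first_seen"], FMT)
    isolated = datetime.strptime(event["isolated"], FMT)
    return (isolated - first).total_seconds() / 60

durations = [time_to_isolation_minutes(e) for e in events]
print("worst case: %.0f minutes" % max(durations))
print("average:    %.0f minutes" % (sum(durations) / len(durations)))

Trended over months, numbers like these give business owners a concrete basis for setting thresholds of acceptable risk.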

Auditing the Effectiveness of Controls

If the above goals have been met, this leads to a third post-implementation benefit that SIEMs

have the opportunity to provide: assisting with effective auditing of the organization’s controls.

The hope is that by establishing risk thresholds and appropriate metrics based on the 20 critical

controls, auditors will be able to measure an organization on those aspects of risk that actually

impact the security of the systems.

Today many auditors know that how long unauthorized devices are on a network would be

valuable data to have, particularly if this device connection information can be gathered over a

long period of time. However, most auditors have no way to perform this type of assessment in

the scope of a given audit; therefore, although most would agree this information is important,

it is rarely if ever measured. A SIEM, properly configured to the 20 critical controls, can record

this information, save it over long periods of time, and report that information to auditors.

Auditors can then use this information to evaluate the risk level of the organization’s systems

and whether appropriate controls exist. Only after the SIEM is implemented does this

information become available.


Things to Keep in Mind

Organizations that leverage SIEM to monitor the enforcement of the 20 critical controls can gain

value from their SIEM systems. However, here are some important things about SIEM to keep in

mind:

1. Most SIEM products have not been designed with the 20 critical controls in mind.

2. SIEMs are useful only if they are configured and able to handle all the relevant event data

feeds.

3. Consolidated data feeds are useful only if the SIEM can automate the business logic and

analysis required to detect sophisticated threats and evaluate risk.

4. Alerts from business analytics in any SIEM are only worth the investment if they are acted

upon.

A SIEM that is truly adaptable to an organization’s business model and metrics will be the most

valuable.

Conclusion

Security threats are dynamic in nature and exploits are constantly evolving as attackers grow

ever more organized, precise and persistent. Therefore, the controls used to protect

information systems also need to be dynamic enough to respond to the evolving nature of

today’s threats. This is the purpose of the 20 critical security controls.

No matter how the threats evolve, the collection and correlation of system, network, user, and

application activity will continue to play pivotal roles in the 20 critical controls guidelines. As

threats and security events evolve, SIEM vendors and the information security community must

work together to build relevant and actionable business analytics into their systems. By

continuously improving recommendations and the controls to support those recommendations,

SIEM products can become true information security hubs that not only automate audits, but

also provide proactive means to protect the organization.

If implemented properly, SIEM can provide the visibility that organizations need to trend and

improve their risk postures over time—which is the ultimate goal of the 20 critical controls. As

such, SIEM or SIEM-like technologies for centralization and consolidation of an organization’s

security data will continue to be important investments for organizations wanting to accurately

respond to threats and ultimately improve their risk and compliance postures.


Log Collection

Comprehensive Log Data Collection and Log Management

Being able to collect log data from across an enterprise regardless of source, present the

logs in a uniform and consistent manner, and manage the state, location and efficient access to

those logs is an essential element of any comprehensive Log Management and Log

Analysis solution. The Log collector solution was designed to address core log management needs,

including:

• The ability to collect any type of log data regardless of source

• The ability to collect log data with or without installing an agent on the log source device, system or application

• The ability to "normalize" any type of log data for more effective reporting and analysis

• The ability to "scale down" for small deployments and "scale up" for extremely large environments

• An open architecture allowing direct and secure access to log data via third-party analysis and reporting tools

• A role-based security model providing user accountability and access control

• Automated archiving for secure long-term retention

• Wizard-based retrieval of any archived logs in seconds

Cross-platform Log Collection

Today's IT operations require many technologies: routers, firewalls, switches, file servers, and

applications, to name a few. Log collector has been designed to collect from them all through

intelligent use of agent-less and agent-based techniques.

Windows Event Logs: Agent-less or Agent-based

Log collector can collect all types of Windows Event Logs with or without the use of an

agent. Log collector collects Event logs via secure TCP transmission. Many Windows-based

applications write their logs to the Application Event Log or a custom Event Log.

Examples of supported log sources that can be collected by Log collector in real time include:

• Windows System Event Log

• Windows Security Event Log

• Windows Application Event Log

• Microsoft Exchange Server application logs

• Microsoft SQL Server application logs

• Windows-based ERP and CRM system application logs

Syslog


Many log sources, including most network devices (e.g. routers, switches, firewalls), transmit

logs via Syslog. Log collector includes an integrated Syslog server for receiving and processing

these messages. Simply point any syslog-generating device at Log collector and it will

automatically begin collecting and processing those logs.
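To make the mechanics concrete, here is a minimal Python sketch of what such a syslog receiver does; it is an illustration of the protocol, not the product's implementation. Classic syslog arrives as one UDP datagram per message on port 514 (binding that port normally requires root privileges):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 514))  # use a high port if not running as root

while True:
    data, (host, port) = sock.recvfrom(4096)
    # Print each message together with the address it came from.
    print("%s: %s" % (host, data.decode("utf-8", errors="replace").rstrip()))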

Enable remote logging with syslog

Managing log files is a vital part of network administration. The syslog utility, which comes

standard with every Linux distribution, offers the ability to log both to local files and to a

remote system. This capability can be essential if you need to view log files from a compromised

machine, particularly if you aren't sure whether an attacker has "scrubbed" (cleaned) the log files to

hide evidence.

Setting up syslog to log remotely is simple. On the system that will receive the log

entries, configure syslog to start with the -r option, which enables it to receive remote log

entries.

For example, on a Mandrake Linux system, edit the /etc/sysconfig/syslog file, and change the

SYSLOGD_OPTIONS parameter to the following:

SYSLOGD_OPTIONS="-r -m 0"

Next, restart the syslog service. You should also ensure that the firewall on that machine allows

access to UDP port 514 from the machines that will be sending it logs.

On the system from which you wish to send log entries, modify the /etc/syslog.conf file and add

something similar to the following at the very bottom:

*.info @loghost.mydomain.com

This tells syslog to send all *.info level log entries to the host loghost.mydomain.com. You can

change which facilities you wish to remotely log, but *.info is generally sufficient. Restart syslog

on this machine as well, and ensure that the firewall allows sending from the local host to the

remote machine on UDP port 514.
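Before checking the log files, you can verify the path by emitting a test message yourself; the logger command does this on most Linux systems, or a few lines of Python will do. The <14> prefix encodes facility user, severity info, per the syslog protocol:

import socket

msg = b"<14>test: remote logging check from this host"
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("loghost.mydomain.com", 514))  # the receiving loghost
sock.close()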

Log entries from the sending host should now appear on the loghost, mixed in with the loghost's

own logs. For instance, your log files may now look like this:

Jan 8 13:23:22 loghost fam[3627]: connect: Connection refused

Jan 8 13:23:24 remote.mydomain.com su(pam_unix)[3166]: session closed for user root

As you can see from this snippet of /var/log/messages, syslog logs information for both loghost

(the local machine) and remote.mydomain.com (the remote host) to the same file. At this point,


install a log-watching utility on the loghost to alert you to any particular issues you would like to

monitor (such as failed logins).
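As a stand-in for a full log-watching utility, here is a minimal sketch that follows /var/log/messages and flags failed logins; the patterns are illustrative and should be matched to your distribution's actual sshd/PAM messages:

import time

WATCH = "/var/log/messages"
PATTERNS = ("Failed password", "authentication failure")

def follow(path):
    # Yield new lines appended to the file, like `tail -f`.
    with open(path) as f:
        f.seek(0, 2)  # start at the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line

for line in follow(WATCH):
    if any(p in line for p in PATTERNS):
        print("ALERT: " + line.rstrip())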

Flat File Logs

Log collector can collect logs written to any ASCII-based text file. Whether they come from a commercial

system or a homegrown application, Log collector can collect and manage them.

Examples of supported log sources using this method include:

• Web server logs (e.g. Apache, IIS)

• Linux system logs

• Windows ISA Server logs

• DNS and DHCP server logs

• Host-based intrusion detection/prevention systems

• Homegrown application logs

• Exchange message tracking logs

Universal Database Log Collection and Log Management

Since so much sensitive information resides in databases, it is important to monitor and track

access and activity surrounding important databases. The actual and reputational cost of a theft

of customer records can be very large. Log collector can help. Log collector collects, analyzes,

alerts, and reports on logs from all ODBC-compliant databases including Oracle, Microsoft SQL

Server, IBM DB2, Informix, MySQL, and others. It also captures data from custom audit logs and

applications that run on the database. This capability enables customers to use Log collector for

real-time database monitoring to guard against insider and outsider threats.
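As a sketch of how agentless database collection can work over ODBC, the following polls a hypothetical audit table using the third-party pyodbc module; the DSN, table and column names are invented placeholders, and a production collector would persist its read position durably:

import time
import pyodbc  # third-party ODBC bridge; any ODBC-compliant database works

conn = pyodbc.connect("DSN=AuditDB;UID=collector;PWD=secret")
cursor = conn.cursor()
last_id = 0  # highest audit-row id already collected

while True:
    cursor.execute(
        "SELECT id, event_time, db_user, action "
        "FROM audit_log WHERE id > ? ORDER BY id", last_id)
    for row in cursor.fetchall():
        last_id = row.id
        print("%s %s %s" % (row.event_time, row.db_user, row.action))
    time.sleep(10)  # poll interval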

Agent-less and Agent-based collection

While most log sources can be collected by Log collector via agent-less methods, Log collector

also offers powerful, low-profile agent technology for situations where it makes

sense. Whether agents are used for real-time flat file log collection or to aggregate and forward

logs from a remote site, Log collector agents are the perfect complement to any deployment.

Log collector agent features include:

• Collection of any flat-file ASCII text log in real time (e.g. web server and application logs)

• Transmission over secure TCP

• Ability to aggregate and forward logs from multiple sources at any remote site (e.g. retail store, branch location)

• Optional encryption during transmission

• Ability to schedule transmission if needed (e.g. due to bandwidth constraints)


• File-integrity monitoring

• Collection load-balancing for distributed deployments

Scalable Log Centralization

Log collector is architected to scale easily and incrementally as your needs grow. Whether you

need to collect 10 million or more than 1 billion logs per day, Log collector can handle it. With

such solutions, you simply deploy the capacity you need when you need it, preserving your

initial investment along the way. Deployments can start with a single, turnkey appliance and

grow easily by adding incremental log manager appliances as needs expand. With Log

collector’s “building blocks” distributed architecture, you can access and analyze logs

throughout your deployment with ease.

Log Archiving and Retrieval

Many businesses have compliance requirements to preserve historic log data and be able to

provide it in its original form for legal or investigative purposes. Collecting, maintaining and

recovering historic log data can be expensive and difficult. Imagine trying to recover logs from a

specific server from two years ago. Were the logs archived or saved anywhere? If so, where have the

logs been stored? What format are they in? Can the correct archived log files be identified

among the tens of thousands (or millions) of other archive files… in a reasonable period of

time? With some solutions, the answers to these questions are easy.

Log collector completely automates the process of archiving and restoring log data. Log

collector automatically archives unaltered log data to “sealed” self-describing files that are

saved, organized and tracked by the system. Archive files can be saved on Log collector

appliances or any network storage device you choose. Log collector uses a SHA-1 hash and

compresses the logs in a non-proprietary format to protect log integrity. Compression typically

results in a 95% reduction in storage requirements and associated cost. Archive files also

include 'bookkeeping' information such as where and when the log data originated and other

key characteristics.
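The general idea, though not the product's exact format, can be sketched in a few lines of Python: compress the raw logs with a non-proprietary codec and record a SHA-1 digest of the original bytes so that later tampering is detectable:

import gzip
import hashlib
import shutil

def seal_archive(raw_path, archive_path):
    # Hash the original bytes, then compress them with gzip.
    sha1 = hashlib.sha1()
    with open(raw_path, "rb") as raw:
        for chunk in iter(lambda: raw.read(65536), b""):
            sha1.update(chunk)
    with open(raw_path, "rb") as raw, gzip.open(archive_path, "wb") as gz:
        shutil.copyfileobj(raw, gz)
    return sha1.hexdigest()

digest = seal_archive("messages.log", "messages.log.gz")
print("SHA-1:", digest)  # store with the archive's bookkeeping data

Verification simply reverses the process: decompress the archive, recompute the digest and compare; a mismatch means the logs were altered after sealing.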

Recovering historic logs is a snap. The Archive Restoration Wizard makes it easy to restore

based on specific filtering criteria like date, user, system, etc. Hit start and Log collector takes

care of the rest. Once restored, log data can be analyzed using standard Log collector analysis

tools. What could have been weeks’ worth of effort becomes minutes.

Activity Auditing

For compliance verification, users’ and administrators’ actions within Log collector are

logged. Log collector user activity reports provide powerful proof that Log collector is actively

used to analyze log data for compliance purposes.


How to Collect Log Files According to Legal Requirements

Snare Agent for Windows

Snare for Windows is a Windows NT, Windows 2000, Windows XP, and Windows 2003

compatible service that interacts with the underlying Windows EventLog subsystem to facilitate

remote, real-time transfer of event log information. Snare for Windows also supports 64-bit

versions of Windows (x64 and IA64).

Snare for Windows Vista is a Windows 2008, Vista and Windows 7 compatible service that

interacts with the underlying "Crimson" EventLog subsystem to facilitate remote, real-time

transfer of event log information. Snare for Windows Vista also supports 64-bit versions of

Windows (x64).

These two agents have now been combined into a single installer with an advanced silent install

feature. Please see the documentation for details.

Event logs from the Security, Application and System logs, as well as the new DNS, File

Replication Service, and Active Directory logs are supported. The supported version of the agent

also accommodates custom Windows event logs. Log data is converted to text format, and

delivered to a remote Snare Server, or to a remote Syslog server with configurable and dynamic

facility and priority settings.

Snare is currently used by hundreds of thousands of individuals and organizations worldwide.

Snare for Windows is used by many large Financial, Insurance, Healthcare, Defense, Aerospace,

and Intelligence organizations to meet elements of local and federal security requirements, such

as:

• ACSI 33

• GLBA (Gramm-Leach-Bliley Act)

• Sarbanes-Oxley (SOX)

• C2 / CAPP

• DCID 6/3

• DIAM 50-4

• DDS-2600-5502-87 Chapter 4

• NISPOM Chapter 8

• HIPAA

• PCI DSS

• California Senate Bill 1386


• USA Patriot Act

• Danish Standard DS-484:2005

• British Standard BS7799

Snare for Windows is free software, released under the terms of the GNU General Public

License (GPL).

Collecting Windows logs using Snare

[Screenshots of the Snare agent configuration walkthrough appeared here.]

Open source and free log analysis and log management tools

The open source log management tools are:

• OSSEC (ossec.net): an open source tool for analysis of real-time log data from UNIX systems, Windows servers and network devices. It includes a set of useful default alerting rules as well as a web-based graphical user interface. This is THE tool to use if you are starting up your log review program; it even has a book written about it.

• Snare agent (intersectalliance.com/projects/index.html) and Project Lasso remote collector (sourceforge.net/projects/lassolog): used to convert Windows Event Logs into syslog, a key component of any log management infrastructure today (at least until Vista/W7 log aggregation tools become mainstream).

• syslog-ng (balabit.com/network-security/syslog-ng/): a replacement for and improvement on the classic syslog service; it also has a Windows version that can be used the same way as Snare.

• Rsyslog (rsyslog.com): another notable replacement for and improvement on the syslog service, which uses the traditional (rather than ng-style) format for syslog.conf configuration files. There is no Windows version, but it has an associated front-end called phpLogCon.


• Among the somewhat dated tools, Logwatch (logwatch.org), Lire (logreport.org) and Logsurfer (crypt.gen.nz/logsurfer) can all be used to summarize logs into readable reports.

• sec (simple-evcorr.sourceforge.net) can be used for correlating logs, even though most people will likely find OSSEC correlation a bit easier to use.

• LogHound (ristov.users.sourceforge.net/loghound) and slct (ristov.users.sourceforge.net/slct) are more "research-grade" tools that are still very useful for going through a large pool of barely structured log data.

• Log2timeline (log2timeline.net/) is a useful tool for investigative review of logs; it can create a timeline view out of raw log data.

• LogZilla (aka php-syslog-ng) (code.google.com/p/php-syslog-ng) is a simple PHP-based visual front-end for a syslog server to do searches, reports, etc.

The next list is an "honorable mentions" list, which includes logging tools that don't quite fit the

definition above:

1. Splunk is neither free nor open source, but it has a free version usable for searching up to

500MB of log data per day - think of it as a smart search engine for logs. Splunk includes a

tool for extracting parameters out of log data.

2. Offering both fast index searches and parsed data reports, Novell Sentinel Log Manager

25 is not open source, but it can be used for free forever as long as your log data volume does

not exceed 25 log messages per second (25 EPS). Unlike Splunk above, it includes log data

parsing for select log formats and thus can be used for running reports out of the box, not

just searching.

3. Q1Labs is also neither free nor open source, but it has a free version usable for managing up

to 50 EPS (roughly 2GB/day). It can be downloaded as a virtual appliance.

4. OSSIM is not just for logs and also includes OSSEC; it is an open source SIEM tool and can

be used in much the same way as commercial Security Information and Event Management

tools (SIEM use cases).

5. Microsoft Log Parser is a handy free tool to cut through various Windows logs, not just

Windows Event Logs. A somewhat similar tool for Windows Event Log analysis is Mandiant

Highlighter (mandiant.com/products/free_software/highlighter).

6. Sguil is not a log analysis tool but a network security monitoring (NSM) tool; however, it uses

logs in its analysis.

7. Loggly now offers free developer accounts (at loggly.com/signup) for their cloud log

management service. The volume limit is 200MB/day and the retention limit is 7 days. If

you'd like to collect and search your logs without running any software, this is for you.

LogBinder - Getting Logs That Make Sense from SQL, Exchange and SharePoint

The Importance of Log Time Synchronization

1. The time of each log from each different system must be correlated by the SIEM in order

to build the sequence of events required to match the event rules (e.g., a file upload

before a file copy is not necessarily information leakage).

2. A time mismatch within logs can not only prevent the detection of security events through

inaccurate correlation caused by time differences, it can also trigger false events.

For example, if someone inserted a CD, burned files to it, and later executed an EXE file

from the CD, then under mismatched-time correlation the SIEM might wrongly trigger

“Execution of Software Originated from a CD”.

3. For logs to be admissible in court, the time difference cannot exceed the

acceptable offset defined by the relevant local or international law. (Admissibility of

Electronically Filed Federal Records as Evidence)


Servers and Systems from Which to Collect Log Files

1. The first systems to collect log files from are the organization’s mission-critical servers

and devices (e.g., server availability status, login anomalies, sudden installations)

2. Second come the servers and devices providing important services

3. Endpoints

Critical Logs and Alerts to Be Collected from All Systems

1. Operating System Security Alerts (local/network logon/logoff)

2. Anti-Virus alerts (“virus found”, “quarantine failed”, “antivirus shutdown”)

3. Data Leakage Prevention alerts (data exfiltration attempts, “unapproved file from a

foreign machine/user-group”)


Choosing Your Solution

Log Management, Analysis, Correlation, and Compliance Reporting

Overview:

Over the last decade, government and industry-specific regulations have required organizations

to develop an effective log management platform. These platforms are complex: they require routine

maintenance and review, event correlation, and the ability to provide near real-time reporting

for compliance and security audits. In addition, the costs of these systems continue to

increase as organizations struggle with the sheer volume of logs generated by their applications

and devices, as well as the ongoing storage requirements to retain the data for years.

Some log collectors provide customers with a best-of-breed cloud service covering the full log

management life cycle.

This solution includes the following:


• Log collection and parsing

• Event correlation and notification

• Integrated managed security services

• Analysis and reporting

• Compliance support

Benefits:

A log management, correlation, and compliance reporting service of this kind provides the following

benefits to customers:

• Global log collection with no agents required

• Safe storage of log events in SAS 70 Type II audited redundant datacenters

• Storage and archiving of data according to business and security data retention policies

• Low administrative overhead

• Security as a Service

• 24×7 global support

• Industry-leading performance

• Service backed by an SLA

• Low monthly cost

• Access to the portal and executive dashboards

Log Analysis

Would it be valuable for you to discover which users outside of a trusted user community had

accessed a file server that stores highly sensitive information? What about knowing which

systems might be affected by a zero-day exploit and being able to prioritize them based upon the asset

value of the impacted hosts? How about being automatically alerted when transactions in

your financials application exceed a certain dollar amount? Some solutions provide a

comprehensive log analysis engine that can cull this level of insight from millions or even hundreds of

millions of logs in real time.

Automated Log Analysis

While some log entries can be extremely interesting and relevant to daily operations, many can

also be extremely uninteresting, at least in the short term. Still, it is important to collect and

manage all logs to ensure you don’t miss anything and can find what you need when you need

it. With manual or homegrown solutions, you would be searching for the proverbial needle in

the haystack. With some solutions, search, forensic analysis, trending and alerting are

simple: logs are processed and normalized to make it easy to identify and find

anything, and intuitive, powerful analysis tools make any kind of analysis

a breeze.


Some solutions automate the process of finding interesting log entries via a powerful and

customizable log identification engine. When a log is identified, it is "normalized" for analysis

and reporting purposes: the log is assigned a "common name" and classified as security,

operations, or audit related, and additional reporting information, such as IP addresses,

UDP/TCP port numbers and logins, is parsed from the text of the log.
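A toy normalization pass might look like the following Python sketch; the rule table is invented for illustration, whereas real identification engines ship with large signature libraries:

import re

# Each rule: a pattern mapped to a common name and classification.
RULES = [
    (re.compile(r"Failed password for (?P<login>\S+) from (?P<ip>\S+)"),
     ("Failed Authentication", "audit")),
    (re.compile(r"Accepted password for (?P<login>\S+) from (?P<ip>\S+)"),
     ("Successful Authentication", "audit")),
]

def normalize(raw):
    for pattern, (name, category) in RULES:
        m = pattern.search(raw)
        if m:
            fields = m.groupdict()  # parsed logins, IPs, ports...
            fields.update(common_name=name, classification=category, raw=raw)
            return fields
    return {"common_name": "Unknown", "classification": "operations", "raw": raw}

print(normalize("sshd[402]: Failed password for root from 10.0.0.7"))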

An important aspect of log normalization is time synchronization. In many IT operations,

systems are spread across time zones and system clocks aren't synchronized to a single

source. For this reason, some solutions automatically synchronize the timestamps of all log

entries to a single 'normal time' for reporting and log analysis purposes. This is extremely

valuable when analyzing log data across distributed systems where time of occurrence is important.

If one log was written at 3:00 PM EST and, across the country, another was written at 12:00

PM PST, within the solution both occurred at the same time.
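The underlying idea can be sketched with Python's standard library, assuming each source's UTC offset is known (EST is UTC-5, PST is UTC-8):

from datetime import datetime, timedelta

SOURCE_OFFSETS = {"ny-server": -5, "sf-server": -8}  # hours from UTC

def to_normal_time(source, local_timestamp):
    # Shift a source-local timestamp to a single 'normal time' (UTC).
    local = datetime.strptime(local_timestamp, "%Y-%m-%d %H:%M:%S")
    return local - timedelta(hours=SOURCE_OFFSETS[source])

a = to_normal_time("ny-server", "2012-01-08 15:00:00")  # 3:00 PM EST
b = to_normal_time("sf-server", "2012-01-08 12:00:00")  # 12:00 PM PST
print(a == b)  # True: both events occurred at 20:00 UTC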

Risk-based Prioritization

Some solutions automatically prioritize each event based on its impact on your business's

operations.

Some solution’s risk-based prioritization calculates a 100-point priority based on the following

(a toy scoring sketch appears after this list):

• Type of event

• Likelihood the event is a false alarm

• The threat rating of the host causing the event (e.g., remote attacker), and

• The risk rating of the server on which the event occurred
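Here is a toy version of such a weighted model; the weights and inputs are illustrative, not any vendor's actual formula:

def priority(event_weight, false_alarm_prob, threat_rating, risk_rating):
    # event_weight, threat_rating, risk_rating: 0-100 scales.
    # false_alarm_prob: 0.0-1.0; higher means more likely a false alarm.
    base = 0.4 * event_weight + 0.3 * threat_rating + 0.3 * risk_rating
    return round(base * (1.0 - false_alarm_prob))

# A likely-real attack from a hostile host against a critical server...
print(priority(80, 0.1, 90, 95))  # high score (79)
# ...versus a noisy, probably-false event on a low-value workstation.
print(priority(40, 0.7, 20, 10))  # low score (8)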


Some solution’s risk-based priority helps ensure the most important events are

identified and acted upon.

The impact of an event varies by business and, within a business, by system. For instance, a

router link failure might not be immediately critical for an ISP with redundant routers; for a

branch office with a single router, however, business is impacted until it is fixed. A server reboot is

uninteresting on a user workstation, but extremely interesting on an ERP server with 99.999%

uptime requirements.

Event Forwarding to Reduce Data for Improved Log Analysis

Identified log entries having the most immediate operational relevance are forwarded to the

Event Manager. This typically includes security events, audit failures, warnings and

errors. Event forwarding rules work “out of the box,” and you can tailor those

rules or create your own. Intelligently forwarding only a subset

of logs provides the first layer of data reduction.
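A minimal sketch of such a forwarding rule, assuming normalized events shaped like those produced in the earlier normalization example:

# Forward only the log entries with immediate operational relevance.
FORWARD_CLASSES = {"security"}
FORWARD_NAMES = {"Audit Failure", "Warning", "Error", "Failed Authentication"}

def should_forward(event):
    # First layer of data reduction: pass on a small, relevant subset.
    return (event["classification"] in FORWARD_CLASSES
            or event["common_name"] in FORWARD_NAMES)

events = [
    {"common_name": "Failed Authentication", "classification": "audit"},
    {"common_name": "Informational", "classification": "operations"},
]
print([e["common_name"] for e in events if should_forward(e)])  # first only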

Log activity for specific filename patterns, IP addresses, hosts or users can also be monitored

easily. When security policies are violated, some solutions can automatically alert designated

individuals via e-mail, pager, existing management applications and the solution's console.

Because only the most important log entries are forwarded as events, users are extremely

efficient with the time they spend in the solution. Instead of having to weed

through numerous irrelevant log entries, the most important logs are automatically identified

for them.

Some solutions feature contextual event forwarding, which enables real-time identification and

alerting of anomalies within application, database and network activity. For example, some

solutions can be used to pinpoint specific exceptions, such as transactions greater than a

specified dollar amount in a financial application, including when each occurred, who was

responsible, and which account was modified.

User-Driven Log Analysis

Once logs are collected, classified, normalized, prioritized, stored and correlated, some rise to

the level of an “event”. The Some solutions Event Management function applies the real-time

monitoring, alerting, incident management and response appropriate for specific events. Some

events warrant a deeper investigation beyond the events themselves to include other related

log data. For these situations, some solutions offer a comprehensive set of investigative

capabilities ranging from high-level trending and visualization to monitoring, in real time, the

activities associated with a specific user, system, device or information asset.


LogMart Delivers Visual Log Analysis

The Some solutions LogMart tool incorporates a powerful set of visualization, data trending and

search capabilities. LogMart aggregates millions of logs in a single graphical view, which can

expose exceptions in security, compliance and operations over short or long periods of

time. The powerful user-configurable charting and filtering capabilities enable users to quickly

switch from viewing months or even years’ worth of log trend data to drilling down to individual

logs exposing the root cause of a security breach or operational problem.

Investigator and Search

The Some solutions Investigator is a powerful investigation tool used for searching and viewing

specific sets of logs and events, such as those associated with a specific user, set of users,

specific IP address or range, impacted hosts, impacted applications, date and time, and

more. An easy to use wizard guides users through the selection of criteria for their specific

investigation. Once defined, investigation criteria can be saved and used again. Investigations

can include events, log metadata, raw log data or any combination thereof.

Some solutions also offer comprehensive search capabilities to meet the unique log analysis needs

of a variety of users. Whether you're an investigator looking for all activity associated with a

specific user, an IT operations manager seeking to understand performance trends for a

particular server or an auditor looking for a list of individuals outside of a trusted user

community that accessed a highly sensitive file server over the last 90 days, Some solution’s

quick search function can serve up unique and highly valuable information derived from millions

of logs quickly and easily.


Security Event Management

Some solution’s Security Event Management function combines prepackaged, automated reporting and

real-time monitoring and alerting with comprehensive incident management and

response. Some solution’s Personal Dashboards present security event information in the most

useful and effective manner to meet the specific needs of individual users. The dashboard also

acts as a portal to a suite of highly effective investigative and reporting tools including the some

solutions Investigator and LogMart. The Quickstart Event Management Package (QsEMP)

delivers the convenience of prepackaged alarms and a broad range of detailed and executive

level reports for immediate, valuable usability and rapid time-to-value.

Quickstart Event Management

Some solution’s QsEMP combines the efficiency of automation with the convenience of

prepackaged expertise. The QsEMP provides IT administrators and executives with a relevant

and useful set of out-of-the-box reports and alarms covering practical Operations, Security and

Audit/Compliance use cases. Combined with Some solution’s powerful and intuitive user-driven

forensics, reporting and alarming capabilities, the QsEMP delivers powerful insight while

reducing the overall Total Cost of Ownership (TCO).


Real-time Monitoring

Because some solutions collect and analyze logs in real time, logs deemed to be security

events are immediately forwarded as such and are escalated according to their level of

criticality. Security event information is delivered in real time to the personal dashboards of those

users predefined as authorized viewers for those classifications of events. Through the personal

dashboard users can monitor security events in real time and quickly review and drill down as

appropriate. Some solutions dashboards can be easily customized by and for each user. As a

result, every user sees and can analyze the information that is most relevant to them and their

role.

Advanced Correlation and Pattern Recognition

Some solution’s Advanced Intelligence (AI) Engine delivers advanced correlation and pattern

recognition for enterprise log data to the event management layer for truly comprehensive

coverage. AI Engine leverages its integration with the log and event management functions

within the some solutions platform to correlate against all log data – not just a pre-filtered

subset of security events. Relevant patterns and correlated event sequences are sent to the

event management layer in real time. Seamless integration enables immediate access to all

forensic data directly related to an event. AI Engine automatically delivers real-time event

management for immediate visibility into risks, threats and critical operations issues.

Role-based Alerting

Some solutions can easily be configured to send alerts on critical events or combinations of

security events to an individual or groups of individuals based upon user roles, asset values of

impacted systems or applications, or a variety of other factors related to ensuring the right

alerts reach the right people at the right time.


Intelligent, Automated Remediation

Some solutions deliver immediate protection from security threats, compliance policy violations

and operational issues with Smart Response. Intelligent, process-

driven capabilities give organizations the power to automatically take action in response to any

alarm. Smart Response delivers immediate action on real-world issues, such as when suspicious

behavior patterns are detected, specific internal or compliance-driven policies are violated, or

critical performance thresholds are crossed. Some solutions ensures that responses are based

on accurate information by performing real-time analysis on all log data, helping to minimize

false positives as well as the delays associated with manual intervention.

Incident Management & Response

The Some solutions solution includes comprehensive incident management capabilities.

Incidents (alarms) are viewed and managed via the real-time personal dashboard. Every action

taken on an alarm is documented (who was notified, when it was analyzed, work that was done,

etc.) as part of the alarm history. A comprehensive set of reports provides a full history of

incident management activity and response. Whether your requirements for tracking incident

management activities are driven by compliance mandates or internal best practices, the some

solutions incident management functions will deliver on your reporting, tracking and audit

needs.


Intelligent IT Search

Would it be valuable for you to be able to discover which users outside of a trusted user

community had accessed a file server that stores highly sensitive information? What about

knowing which systems had been affected by a zero-day exploit and being able to prioritize them based upon

the asset value of the impacted hosts? How about being automatically alerted when

transactions in your financials application exceed a certain dollar amount?

Logs are the digital fingerprints for virtually all network, system and application

activity. Whether you’re searching for the root cause of a system failure or performance issue,

looking for present or potential threats, conducting an IT investigation or satisfying an


eDiscovery request from Legal or HR, chances are you’ll be searching through log data.

For IT professionals, the question isn’t whether or not you’ll be searching log data; the question

is how quickly you can find the information you’re looking for, if at all. Will it take days, weeks

or months, or can you find it with a few clicks of the mouse? The answer depends on four things:

• Is your log data collected centrally from all log sources and stored in an intelligently indexed format?

• How well has your log data been enriched and prepared for intelligent search?

• How intuitive and quick is the search process?

• How meaningful and insightful are the search results?

Traditional approaches to log search require users to know precisely what they are looking for, and to create, then refine, search terms to locate events that map to their query. Some solutions process logs and tag them using a rich and granular three-tier classification model that enables users to perform intelligent IT search. This capability assesses the impact of events in multiple dimensions to extract meaning from what would otherwise appear to be just isolated logs.

By adding this additional intelligence to raw logs, Some solutions enables IT organizations to

quickly identify internal and external threats, operations issues and compliance

violations. Additionally, Intelligent IT Search simplifies and accelerates forensic investigations

and eDiscovery responses.


Adding Intelligence to Raw Logs

Some solutions enriches logs with the following information to generate query results that

provide intelligence… not simply data:

• Universal time stamp for every log: essential for accurate correlation and contextualization, especially when conducting forensic analysis of events that span multiple geographies.

• Three-tier classification system:

o Security: Compromise, Attack, Denial of Service, etc.

o Operations: Critical Event, System Error, Warning, etc.

o Audit: Admin Account Creation, Failed Authentication, etc.

• Prioritization of events: a 100-point risk model prioritizes events based on what happened, what systems or applications were impacted, what users were involved, etc.

• User and host contextualization: differentiates origin from impacted users and hosts, enabling security teams to rapidly identify exposure, impacted users and systems, and the origin and direction of threat activity. For example, a large file transfer (10 MB) from a sensitive internal database (SAP) to an external IP address (in Romania).

Utility Tool Chest for Intelligent IT Search

Once log data is enriched, Some solution’s broad suite of search utilities empowers users to

rapidly investigate, view, correlate and visualize logs in a variety of ways to meet specific search

objectives. The Intelligent Search Utilities include:

• Wizard-based Search: easily create complex search criteria across normalized, classified and contextualized data.

• Real-time Search: apply search criteria to log data as it is generated, in real time, via the solution's Tail. Configure alerts to be sent whenever the specified search criteria are met in the future.

• Visualization: present millions of logs in a 3-D graphical representation to discover anomalies and analyze trends.

• One-click Correlation: rapidly refine a search with a single click on related data.

• Quick Search Tool Bar: provides rapid search initiation directly from any screen.


Live & Near Real-Time Monitoring

Advanced Intelligence Engine: AI Engine delivers real-time visibility into risks, threats and

critical operations issues.

Some solution’s Advanced Intelligence (AI) Engine is an optional module for any

deployment, offering sophisticated correlation and analysis of all enterprise log data in a uniquely

intuitive fashion. With a practical combination of flexibility, usability and comprehensive data


analysis, AI Engine delivers real-time visibility into risks, threats and critical operations issues that

are otherwise undetectable in any practical way. AI Engine is Correlation That Works!

With over 100 preconfigured, out-of-the-box correlation rule sets and a wizard-based drag-and-

drop GUI for creating and customizing even complex rules, AI Engine enables organizations to

predict, detect and swiftly respond to:

• Sophisticated intrusions

• Insider threats

• Fraud

• Compliance violations

• Disruptions to IT services

• Many other critical, actionable events

Comprehensive Advanced Correlation

Unlike legacy SIEM solutions, AI Engine leverages its integration with the log and event

management functions within the some solutions platform to correlate against all log data – not


just a pre-filtered subset of security events. Seamless integration also enables immediate access

to all forensic data directly related to an event.

AI Engine rules draw from over 50 different metadata fields that provide highly relevant data for

analysis and correlation. Whether detected by out-of-the-box rules or user-created/modified

rules, AI Engine identifies and alerts on actionable events with tremendous precision, for

operations, security and compliance assurance. AI Engine can also be used to cast a wide net

through generalized correlation rules for broader visibility that accommodate changes in event

behavior.

TrueTime

Some solutions apply a universal timestamp to every log as it is processed. This ensures that the actual time of occurrence of an activity is recorded accurately – regardless of external factors, such as an out-of-sync server clock, delayed delivery of a log or differences in time zones. TrueTime guarantees that advanced correlation within AI Engine is based on chronological fact – recognizing the true sequence of events – minimizing false positives and avoiding false negatives.
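The idea is simple to approximate in code. Below is a minimal Python sketch (not the vendor's implementation) of collection-time timestamping: the collector stamps each log with its own UTC clock on arrival, so later correlation does not depend on the source device's clock, time zone or delivery delay.

from datetime import datetime, timezone

def stamp_on_arrival(raw_log: str) -> dict:
    # The collector's own UTC clock is authoritative, regardless of any
    # (possibly skewed or zone-shifted) timestamp embedded in the log itself.
    return {
        "collected_utc": datetime.now(timezone.utc).isoformat(),
        "raw": raw_log,
    }

event = stamp_on_arrival("Oct 10 13:55:36 host1 sshd[999]: Failed password")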

AI Engine Delivers:

Advanced Correlation Against All Log Data

TrueTime Event Sequencing

Immediate Access to Underlying Forensic Data

Generalized and Targeted Rules

Extensive Out-of-the-Box Advanced Correlation Rules

Unparalleled Ease of Use

AI Engine in Action

AI Engine’s numerous predefined advanced correlation rule sets are configured to run “out-of-

the-box” and act as templates for easy customization. All rules within AI Engine can be quickly

modified through a highly intuitive GUI to address unique requirements of any organization.

Secure

A single event is not always enough to indicate a breach or show the true reach of a security

incident. AI Engine recognizes common security incidents and automatically correlates them

against suspicious behavior patterns to automatically identify and alert on aberrant activity. For

example, malware can invade and spread through an organization quickly, exposing data and


weakening security faster than administrators can react. In many cases, the extent of damage is

unknown.

Examples:

Malware is detected on one host followed by attacks from that affected host

Suspicious communication from an external IP Address is followed by data being

transferred to the same IP Address

A user logs in from one location, does not log out, but logs in from another city or

country in a short timeframe

Comply

AI Engine can assist in automating compliance controls, generating events when specific policy

violations occur. These include protecting cardholder data or Protected Health Information

(PHI) from unauthorized access and actively monitoring privileged user behavior.

Examples:

Five failed authentication attempts followed by a successful login to a database

containing ePHI followed by a large data transfer to the user’s machine all within 30

minutes

A file containing credit card data is accessed, followed by an attempt to transfer

information from the same host to a USB thumb drive within 10 minutes

Creating one or multiple accounts and escalating their privileges in a short period of

time
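To make the first example concrete, here is a hedged Python sketch of the failed-then-successful-authentication portion of such a rule, assuming a hypothetical event format (dicts carrying 'user', 'outcome' and a datetime 'time'); correlating the subsequent large data transfer would be an additional step on top of this:

from datetime import timedelta

WINDOW = timedelta(minutes=30)

def failed_then_success(events, threshold=5):
    # Track recent failure times per user; flag a success that follows
    # at least `threshold` failures inside the correlation window.
    failures = {}
    for ev in sorted(events, key=lambda e: e["time"]):
        recent = [t for t in failures.get(ev["user"], [])
                  if ev["time"] - t <= WINDOW]
        if ev["outcome"] == "failure":
            recent.append(ev["time"])
        elif ev["outcome"] == "success" and len(recent) >= threshold:
            yield ev  # candidate: brute force followed by a login
        failures[ev["user"]] = recent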

Optimize

Advanced correlation offers substantial value for operational insight and IT services

assurance. Slight variations in specific activities or a particular sequence of more common

operations events may indicate critical operations issues.


Examples:

A backup process is started, but no log for backup completed is generated

A critical process stops and doesn’t start back up within a specific timeframe

High I/O rates on a critical server, usually observed only during backup procedures, are observed during normal business hours.

AI Engine Deployment Option

Designed to integrate with any core deployment, AI Engine can be purchased as a

turnkey appliance, installed as software on dedicated customer equipment or deployed on

multiple virtualization platforms, including VMware ESX, Microsoft Hyper-V, and Citrix

XenServer. High performance appliances can process tens of thousands of logs per second and

billions of logs per day. AI Engine follows the solution's building-block architecture –

expansion is as simple as plugging in an additional appliance. All appliances are centrally

configured, monitored, and managed through the solution's universal console.

Detecting Important Events

Self-Observation Events – Detecting systems that stopped reporting events to the SIEM

Detecting anomalies

System status (stuck, hung, crashed, infinite restarts)

Network Traffic

Internet Traffic

CPU

Disk Activity (Read/Write vs. Statistics)

Specific/Dangerous/Defined Content Types

Challenges with Log Management

Lack of Standard Log Format

Meeting compliance mandates has caused vendors to build log management solutions that

focus on storage and canned reporting, but don't make the data useful for day-to-day

operations, security, and the deluge of one-off requests from auditors. Existing log management

solutions are too narrow, having been built to use log data for compliance, when in fact log data is an important source of truth, critical for troubleshooting issues and supporting broader

business objectives. And why stop with the log data? Application logs and other machine data

also contain important data which traditional log management solutions simply miss.

How are you managing access to and analysis of your log data today? Can you access all your

logs from one central location? Can you quickly search and analyze your logs to troubleshoot

issues, meet compliance requirements and investigate security threats?


Some solutions automatically index all the data, including complex multi-line application logs,

enabling you to search on all the data without need for custom connectors, and without

limitations inherent in database schemas. Some solutions allow quick searching and reporting on this data, interpreting the data as you search to provide more complete context. The result is

a more flexible and complete approach to using and analyzing log data, enabling you to

diagnose issues and troubleshoot security incidents faster, and providing repeatable and

affordable compliance. Your log management capabilities are now more powerful, flexible, and

no longer limited to "select" data sources or a "fixed" set of reports.

Reacting: Traditional Approach to Computer Management

Policy & Process

Review your logs

o Access logs

o Application logs

o Tomcat logs

o System (e.g. firewall) logs

What do you do if you find an attempted attack?

What do you do if you find a successful attack?

What do you do if a Tomcat vulnerability is announced?

Ensuring the existence and accumulation of Forensic Evidence

1. Simulating a few attacks to measure and test the SIEM’s responsiveness

2. Simulating the top 10 security incidents and reviewing the forensic evidence and logs

obtainable to execute a forensic investigation (in cooperation with an external certified

computer forensics professional)

3. Fixing the missing parts

4. Repeating the process periodically (at least every 6 months)

Learning from your Log files

Standard web servers like Apache and IIS generate logging

messages by default in the Common Log Format (CLF) specification.

The CLF log file contains a separate line for each HTTP request. A

line is composed of several tokens separated by spaces:



host ident authuser date request status bytes

If a token does not have a value, it is represented by a hyphen (-). The tokens have these meanings:

● host: The fully qualified domain name of the client, or its IP address.

● ident: If the IdentityCheck directive is enabled and the

client machine runs identd, then this is the identity information reported by the client.

● authuser: If the requested URL required a successful

Basic HTTP authentication, then the user name is the value of this token.

● date: The date and time of the request.

● request: The request line from the client, enclosed in double quotes (").

● status: The three-digit HTTP status code returned to the client.

● bytes: The number of bytes in the object returned to the

client, excluding all HTTP headers.

A request may contain additional data like the “referer” and

the user agent string. Let us consider an example of a log entry (in

the Combined Log Format [Apache Combined Log Format, 2007]):

127.0.0.1 - frank [10/Oct/2007:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326 "http://www.example.com/links.html" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322)"


127.0.0.1: The IP address of the client.

-: The hyphen in the output indicates that the requested piece of information is not available. In this case, the missing information is the RFC 1413 identity of the client, as determined by identd on the client’s machine.

frank: This is the userid of the person requesting the

document as determined by HTTP authentication.

[10/Oct/2007:13:55:36 -0700]: The time that the server finished processing the request.

"GET /index.html HTTP/1.0": The request line from the client is given in double quotes.

200: This is the status code that the server sends back to the client.

2326: This entry indicates the size of the object returned

to the client, not including the response headers.

"http://www.example.com/links.html": The "Referer" (sic) HTTP request header.


"Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR

1.1.4322)": The User­Agent HTTP request header.

Interesting notes about log files:

1. The server must have write-only access to log files

2. Log files should be saved on a different machine, or at least a different partition

3. Log files should be encrypted

Interesting Anomalies to find in your log files:

1. GET Based Attacks:

a. XSS attempts:

i. <script>alert()</script>…

ii. alert(

iii. All the common attack patterns listed in “XSS CheatSheet”

b. SQL Injection

i. “Select/insert/update/alter/drop/load_file/bulk insert”

ii. Ascii(substring(

iii. And/or [x] [<>=] [x]

c. Local File Inclusion to Remote File Inclusion Via Apache Log PHP Injection

i. Looking for “<?php .* ?>” in the log file

ii. include "data:;base64,PD9waHAgcGhwaW5mbygpOz8+";

iii. require|include|system|passthru|eval\s+($[a-zA-Z0-9\-\_\.\/\\]+)

d. Detecting Directory Traversal Attacks

i. ../

ii. ..\

iii. %2e%2e%2f

e. IP Based Server References

i. Clients that referenced the server by its external IP address rather than its domain name usually indicate that the client came from an IP scan and not a web-based reference.

ii. HTTP referrer header – requests for any page other than the website root page that arrive without a referrer field usually indicate a manual attack and not a normal user.

2. Post based attacks

a. Correlating suspicious IPs from GET-identified attacks

b. Identifying repeated POST requests to the same pages with minor changes in

request size

c. If suspicion builds up, changing the logging configuration to include POST request parameters in the logs.
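As a starting point, the Python sketch below greps an access log for a few of the GET-based patterns listed above. The expressions are deliberately crude and purely illustrative; a production deployment should rely on a maintained rule set such as the OWASP Core Rule Set.

import re

SIGNATURES = {
    "xss": re.compile(r"<script|alert\(", re.I),
    "sqli": re.compile(r"\b(select|insert|update|alter|drop|load_file)\b", re.I),
    "lfi_rfi": re.compile(r"<\?php|data:;base64,", re.I),
    "traversal": re.compile(r"\.\./|\.\.\\|%2e%2e%2f", re.I),
}

def scan(path):
    # Report every log line that matches one of the rough signatures above
    with open(path, encoding="utf-8", errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            for name, rx in SIGNATURES.items():
                if rx.search(line):
                    print(f"{path}:{lineno}: possible {name}: {line.strip()[:120]}")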


Detect Alternate Data Stream Trojans

Advanced Attack Logging with Mod_Security – An Open Source Web Application Firewall

# Stop apache – if installed and running

service httpd stop

# Install Apache APR Utility Library

yum install *apr*

# Install HTTP Development Libraries

yum install *http-devel*

# Install Required Libraries

yum install *pcre-devel*

yum install libxml2

yum install libxml2-devel

yum install curl-devel

yum install gcc-c++

# Fix mod_security compile to find libxml2

export C_INCLUDE_PATH=/usr/include/libxml2/:$C_INCLUDE_PATH

export CPLUS_INCLUDE_PATH=/usr/include/libxml2/:$CPLUS_INCLUDE_PATH

export C_INCLUDE_PATH=/usr/include/curl/:$C_INCLUDE_PATH

export CPLUS_INCLUDE_PATH=/usr/include/curl/:$CPLUS_INCLUDE_PATH

# Download the latest stable version of Mod_Security

wget http://www.modsecurity.org/download/modsecurity-apache_2.6.0.tar.gz

# Extract & Compile

tar -xvf modsecurity-apache_2.6.0.tar.gz

cd modsecurity-apache_2.6.0/Apache2/

./configure

make

make install

# Install OWASP mod_security Core Rule Set

svn co https://mod-security.svn.sourceforge.net/svnroot/mod-security/crs/trunk /etc/httpd/conf/modsecurity/

# Turn on the Example Configuration File:

cd /etc/httpd/conf/modsecurity/

mv modsecurity_crs_10_config.conf.example modsecurity_crs_10_config.conf

# Disable the following rules

cd /etc/httpd/conf/modsecurity/base_rules/
mv modsecurity_crs_21_protocol_anomalies.conf modsecurity_crs_21_protocol_anomalies.conf_.disable

mv modsecurity_crs_30_http_policy.conf modsecurity_crs_30_http_policy.conf_.disable

mv modsecurity_crs_41_phpids_converter.conf modsecurity_crs_41_phpids_converter.conf_.disable

mv modsecurity_crs_41_phpids_filters.conf modsecurity_crs_41_phpids_filters.conf_.disable

mv modsecurity_crs_49_enforcement.conf modsecurity_crs_49_enforcement.conf_.disable

mv modsecurity_crs_60_correlation.conf modsecurity_crs_60_correlation.conf.disable

# Activate Mod_Security inside the Apache configuration file (httpd.conf):

Include conf/modsecurity/*.conf

LoadFile /usr/lib/libxml2.so

LoadModule unique_id_module modules/mod_unique_id.so

LoadModule security2_module modules/mod_security2.so

# ModSecurity configuration (ModSecurity 2.x directive names; the older
# 1.x SecFilter* directives were replaced in the 2.x branch)
# Turn the rule engine On or Off
SecRuleEngine On

# Present a fake server name (mod_security directive)
SecServerSignature "Microsoft-IIS/7.5"

# Disable the Apache server signature (don’t print “Server: Apache mod_xxx…”)
ServerSignature Off

# URL and Unicode encoding validation is performed by rules in 2.x
# (e.g. the @validateUrlEncoding operator used by the Core Rule Set)

# Let mod_security inspect POST payloads
SecRequestBodyAccess On


# By default log and deny suspicious requests with HTTP status 500
SecDefaultAction "phase:2,deny,log,status:500"

# Use RelevantOnly auditing
SecAuditEngine RelevantOnly

# Must use concurrent logging

SecAuditLogType Concurrent

# Send all audit log parts

SecAuditLogParts ABIDEFGHZ

# Check the apache config:

/usr/sbin/apachectl -t

# Restart Apache

/etc/init.d/httpd restart

Reacting to Events

Defining Top Event Scenarios

1. Single/Repeated Blue Screen

Response: automated, alert security team for immediate inspection

2. “Device Stopped Reporting” – any device whose last reported event was not “shutting down” and which has not sent any events to the SIEM for over X amount of time (roughly the time of a reboot)

Response: automated, alert security team for immediate inspection

3. “Driver Loaded”/“Service Started” of an unknown/unsigned/new driver file on any machine which is NOT new or in an installation/setup/upgrade process (possible rootkit)

Response: automated, transfer to an isolated VLAN with no access to any machine or the internet, and alert the security team for immediate inspection

4. WSUS success report is less than 50%, 1 day after “Patch Tuesday”

Response: automated, alert security team for immediate inspection


5. An unknown external IP reported in the firewall log for a “port scanning” event, combined with an internal machine creating a connection to that IP at any time, is a security event of type breach/intrusion/hacking

Response: automated, transfer to an isolated VLAN with no access to any machine or the internet, and alert the security team for immediate inspection

6. A certain Hostname|IP|MAC tries to log on to the network with a specific user and fails; X time later the same IP succeeds in logging in with a different user – a “successful network brute force”. (Only relevant on non-dispatcher stations, where people/users don’t switch stations.)

Response: automated, transfer to an isolated VLAN with no access to any machine or the internet, and alert the security team for immediate inspection

In the alert, include the user presence status from the “work presence reporting system” (came to work, in the building, on floor 5, last action: left floor 6, etc.)

7. Existing/New User is added to Domain\Administrators group

Response: automated, if during work hours: alert security team for

immediate inspection else: disable user account

In the alert, include all IPs, NetBIOS sessions and applications connected to

this domain controller

8. New local user with administrator privileges is created on a machine

(endpoint/server)

Response: automated, if during work hours: alert security team for

immediate inspection else: disable user account

In the alert, include all IPs, NetBIOS sessions and applications connected to

this machine

9. An executable file that has no service linking to it and is not a child process of a

known service is running under SYSTEM account

Response: automated, if during work hours: alert security team for

immediate inspection else: kill process

10. A process that is not a known “Debugger Application” holds the “Debug Privilege”. It is also possible to add: “if the process also calls GetAsyncKeyState/GetKeyboardState/SetWindowsHookEx”, etc.

Response: automated, if during work hours: alert security team for

immediate inspection else: kill process

11. Any unexpected application shutdown/crash of any application which handles MIME types acceptable as attachments via email


Response: automated, if during work hours: alert the security team for immediate inspection; else: alert and transfer to an isolated VLAN with no access to any machine or the internet
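Most of these scenarios reduce to straightforward checks once the SIEM exposes the data. As an illustration, the Python sketch below approximates scenario 2 above (“Device Stopped Reporting”), assuming a hypothetical mapping of each device to its last event time and message:

from datetime import datetime, timedelta, timezone

MAX_SILENCE = timedelta(minutes=30)  # tune to the expected reboot time

def silent_devices(last_event_by_device, now=None):
    now = now or datetime.now(timezone.utc)
    for device, (last_seen, last_message) in last_event_by_device.items():
        if "shutting down" in last_message.lower():
            continue  # a clean shutdown explains the silence
        if now - last_seen > MAX_SILENCE:
            yield device  # alert the security team for immediate inspection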

Designing & Defining the response actions

Smart Response

Some solutions deliver immediate protection from security threats, compliance policy violations

and operational issues with Smart Response. Intelligent, process-driven capabilities give

organizations the power to automatically respond to any alarm. Smart Response delivers

immediate action on real-world issues, such as when suspicious behavior patterns are detected,

specific internal or compliance-driven policies are violated, or critical performance thresholds

are crossed. The solution ensures that responses are based on accurate information by

performing real-time analysis on all log data, helping to minimize false positives as well as the

delays associated with manual intervention.

Automated Remediation That Works for You

Many organizations find that implementing automated remediation creates more risk than it is designed to prevent. One of the problems is that it is typically an all-or-nothing process, meaning any enabled action will be taken without providing an option for external validation. Because of the number of variables tied to an individual event and the risks associated with incorrectly interrupting critical operations, most organizations are justifiably reluctant to employ automated remediation beyond that tied to the most mundane use cases. Some solution’s Smart Response was specifically designed so that any action can be easily configured to meet important organizational policies and to provide assurances that each response is the correct action to take. It comes with an optional, built-in approval process that can require up to 3 levels of authorization prior to taking action. That gives organizations the option of reviewing the facts first – before the wrong person’s access is removed or a critical application is mistakenly shut down. And if that particular remediation is determined to be the correct course of action, the response is already queued up for immediate execution at the click of a button.


How It Works

A simple, plug-in based GUI allows administrators to import any script-based response, which

can then be activated by any advanced correlation or event-based alarm. The solution’s Smart

Response includes:

Optional requirements for up to three levels of authorization

Targeted responses to exact alarm parameters, such as:

Suspicious IP addresses to block

Specific rogue users to quarantine

Individual processes to start or stop


Over 50 unique fields for maximum precision

Incident Response Management with:

Current remediation status

Alarm recipient tracking

Authorization path auditing

One-click testing for script validation
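As an example of the kind of script such a framework can execute, the following hedged Python sketch blocks an attacking IP address passed in from an alarm by inserting a firewall rule. The iptables invocation assumes a Linux enforcement point, and the argument handling is purely illustrative.

import ipaddress
import subprocess
import sys

def block_ip(addr: str) -> None:
    ipaddress.ip_address(addr)  # validate; refuse to run on garbage input
    subprocess.run(
        ["iptables", "-I", "INPUT", "-s", addr, "-j", "DROP"],
        check=True,
    )

if __name__ == "__main__":
    block_ip(sys.argv[1])  # e.g. invoked by the alarm with the source IP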

Smart Response in Action

Some solutions Labs provides out-of-the-box access to practical scripts designed to address

common organizational issues related to security, compliance and operations. Smart

Response can execute any script that a user can create, with optional safeguards to require up to

three levels of authorization before performing any action. Examples include:

Advanced Threat Detection & Response (External)

Problem: Malware frequently attempts to access an environment by logging in to multiple servers, moving from one target to the next until access is granted.

Detection: The solution can alarm on suspicious behavior, such as access attempts to multiple hosts within the network from a single IP address or non-whitelisted location.

Action: Smart Response can pull the attacking IP address directly from an alarm and add it directly to a firewall ACL, instantly terminating potentially dangerous access to your network.

Advanced Threat Detection & Response (Internal)

Problem: Systems administrators have the ability to access and modify systems and create accounts with escalated privileges, allowing them to engage in a broad range of malicious activity while avoiding detection.

Detection: The solution can notify when any new account with escalated privileges is created, or if suspicious modifications have been made to accounts accessing critical systems.

Action: Smart Response can automatically remove newly added or recently modified privileged accounts until the activity has been verified as legitimate.

Compliance Automation & Assurance

Problem: Many compliance regulations require strict access controls to confidential data, such as protected health information (PHI) or customer credit card accounts.

Detection: The solution can determine which users are authorized to access critical assets or specific files, detecting in real time when an access policy is violated and generating an alarm.

Action: Smart Response can immediately remove any user guilty of an access violation from the network until the incident can be investigated, actively enforcing policy and protecting critical assets.

Operational Intelligence & Optimization

Problem: Detecting when all aspects of a server have restarted properly after routine maintenance is challenging – particularly in large enterprises with a large number of distributed hosts.

Detection: The solution can independently detect when a critical process stops and/or fails to restart following a specific event, such as a reboot.

Action: Smart Response can restart individual processes, pulling all relevant information, such as the process name and impacted host, directly from the alarm.


Specific SIEM Script Implementation

Monitoring pastebin.com within your SIEM

For those who (still) don’t know pastebin.com, it’s a website mainly for developers. Its purpose is

very simple: You can “paste” text on the website to share it with other developers, friends, etc.

You paste it, optionally define an expiration date and whether it’s public or private, and you are done.

For a while now, this online service has been used more and more to post “sensitive” information like passwords or email lists. By “sensitive”, I mean “stolen” or “leaked” data. Indeed, pastebin.com allows anybody to use its services without any authentication; it’s easy to remain completely anonymous (if you submit data via proxy chains, Tor or any other tool which takes care of your privacy).

In big organizations, marketing departments and agencies learned long ago how to use social networks. They can follow what has been said about their products and marketing campaigns.

In my opinion, it is equally important to follow what’s posted about your organization on

pastebin.com! Many people are looking for interesting data on pastebin.com from an offensive

point of view. Let’s see how this can also benefit the defensive side.

For me, pastebin.com became an important source of information and I keep an eye on it every

day. But, due to the huge amount of information posted every minute, it is impossible to

process it manually. Of course, you can search for some keywords, but that is totally inefficient. At first, I grabbed and processed some HTML content using the classic UNIX tools. Later, I found a nice Python script developed by a few people:

http://ghosthunter.googlecode.com/svn/trunk/scripts/pastebin/pastebin.py

It also permits reloading the regular expressions without stopping the script (by sending it a SIGHUP) and dumping the pastes already found to the screen (with SIGUSR1).

This is a sample output:

./pastebin.py ./file.txt

[!] My PID is: 9475

[!] Loading regular expressions

Dumping stored matches:

[!] Found Match. http://pastebin.com/raw.php?i=XXXXXXXX : @aol\.com [33

times] || @yahoo\.com [42 times] || @gmail\.com [729 times] || @hotmail\.com

[355 times] || [\w\-][\w\-\.]+@[\w\-][\w\-\.]+[a-zA-Z]{1,4} [5344 times] ||


@comcast\.net [1 times] || ;

[!] Found Match. http://pastebin.com/raw.php?i=XXXXXXXX : @comcast\.net [1

times] || @hotmail\.com [4 times] || @gmail\.com [11 times] || @yahoo\.com [12

times] || [\w\-][\w\-\.]+@[\w\-][\w\-\.]+[a-zA-Z]{1,4} [37 times] || ;

[!] Found Match. http://pastebin.com/raw.php?i=XXXXXXXX : [\w\-][\w\-

\.]+@[\w\-][\w\-\.]+[a-zA-Z]{1,4} [1 times] || INSERT INTO [1 times] ||

union.+select.+from [7 times] || ;

[!] Found Match. http://pastebin.com/raw.php?i=XXXXXXXXX : @yahoo\.com [2

times] || -- phpMyAdmin SQL Dump [1 times] || @gmail\.com [2 times] || [\w\-

][\w\-\.]+@[\w\-][\w\-\.]+[a-zA-Z]{1,4}

[6 times] || INSERT INTO [1 times] || CREATE TABLE [1 times] || ;

[!] Found Match. http://pastebin.com/raw.php?i=XXXXXXXXX : -----BEGIN RSA

PRIVATE KEY----- [1 times] || ;

End of dump

https://raw.github.com/xme/pastemon/3ffb31d5f1e3e9e40fe1f2daf4393def6f10fc6c/pastemon.pl

It checks continuously for data leaks on pastebin.com using regular expressions. I kept it running for a while on a Linux box and it did quite a good job, but I needed more! Xavier’s script prints the found “pasties” to the console. It is possible to dump the detected pasties by sending a signal to the process – not always easy. That’s why I decided to go a step further and write my

own script! The principle remains the same as the script in Python (why re-invent the wheel?)

but I added two features that I found interesting:

It must run as a daemon (fully detached from the console) and started at boot time.

It must write its finding in a log file.

The next step sounds logical: if you have a log file, why not process it automatically? Let’s monitor pastebin.com within your SIEM! If information about your organization is found posted on pastebin.com, it can be very useful to be notified (a great added value for your DLP processes). My script

generates Syslog messages and (optionally) CEF (“Common Event Format“) events which can be

processed directly by an ArcSight infrastructure. Syslog messages can be processed by any SIEM

or log management solution like OSSEC (see below). It is now possible to completely automate

the process of detecting potentially sensitive leaked data and to generate alerts on specific

conditions.
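The logging side of such a monitor is simple to reproduce. The Python sketch below (names are illustrative, not the actual script) emits a syslog message whenever a regular expression matches a paste, in the same spirit as the messages shown further down; it assumes a Linux host with a local syslog socket:

import logging
import logging.handlers

log = logging.getLogger("pastemon-sketch")
log.addHandler(logging.handlers.SysLogHandler(address="/dev/log"))  # local syslogd
log.setLevel(logging.INFO)

def report_match(url: str, pattern: str, count: int) -> None:
    # Mirrors the "Found in <url> : <regex> (n times)" message format
    log.info("Found in %s : %s (%d times)", url, pattern, count)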

First install the script on a Linux machine. Requirements are light: a Perl interpreter with a few modules (normally all of them are already installed on recent distributions) and web connectivity to http://pastebin.com:80. If you are behind a proxy, you can define the following environment variable; it will be used by the script:


# export HTTP_PROXY=http://proxy.company.com:8080

The script can be started with some useful options:

Usage: ./pastemon.pl --regex=filepath [--facility=daemon ] [--ignore-case][--debug] [--help]

[--cef-destination=fqdn|ip] [--cef-port=<1-65535>] [--cef-severity=<1-10>]

Where:

--cef-destination : Send CEF events to the specified destination (ArcSight)

--cef-port : UDP port used by the CEF receiver (default: 514)

--cef-severity : Generate CEF events with the specified priority (default: 3)

--debug : Enable debug mode (verbose - do not detach)

--facility : Syslog facility to send events to (default: daemon)

--help : What you're reading now.

--ignore-case : Perform case insensitive search

--regex : Configuration file with regular expressions (send SIGUSR1 to reload)

Once running, the script scans for newly uploaded pasties and searches for interesting content using regular expressions. There is no limitation on the number of regular expressions (defined in a text file). So as not to disturb the pastebin.com webmasters, the script waits a random number of seconds between GET requests (between 1 and 5 seconds). There is only one mandatory parameter, ‘--regex’, which gives the text file with all the regular expressions to use (one per line).

If one of the regular expressions matches, the following information will be sent to the local

Syslog daemon:

Jan 16 14:43:24 lab1 pastemon.pl[29947]: Sending CEF events to 127.0.0.1:514 (severity 10)

Jan 16 14:43:24 lab1 pastemon.pl[29947]: Loaded 17 regular expressions from

/data/src/pastemon/pastemon.conf

Jan 16 14:43:24 lab1 pastemon.pl[29947]: Running with PID 29948

<time flies>

Jan 16 15:57:48 lab1 pastemon.pl[29948]: Found in http://pastebin.com/raw.php?i=hXYg93Qy

: CREATE TABLE (9 times) -- phpMyAdmin SQL Dump (1 times)

All matching regular expressions are listed with their number of occurrences. This can be easily

processed by OSSEC using the following decoder:

<decoder name="pastemon">

<program_name>^pastemon.pl</program_name>

</decoder>

<decoder name="pastemon-alert">


<parent>pastemon</parent>

<regex>Found in http://pastebin.com/raw.php?i=\.+ : (\.+) \(</regex>

<order>data</order>

</decoder>

The first regular expression matched is stored in the OSSEC “data” variable, to be used as a condition in rules. Here is an example: rule #100204 (building on base rule #100203) will trigger an alert if yahoo.com email addresses are leaked on pastebin.com. (Note: this regular expression must be defined in the script configuration file!)

<rule id="100203" level="0">

<decoded_as>pastemon</decoded_as>

<description>Data found on pastebin.com.</description>

</rule>

<rule id="100204" level="7">

<if_sid>100203</if_sid>

<description>Detected yahoo.com email addresses on pastebin.com!</description>

<extra_data>@yahoo\.com$</extra_data>

</rule>

If you have an ArcSight infrastructure, you can enable CEF event support. The same event as above will be sent to the configured CEF destination and port:

<29>Jan 16 15:57:48 CEF:0|blog.rootshell.be|pastemon.pl|v1.0|regex-match|One or more

regex matched|10|request=http://pastebin.com/raw.php?i=hXYg93Qy

destinationDnsDomain=pastebin.com msg=Interesting data has been found on pastebin.com.

cs0=CREATE TABLE cs0Label=Regex0Name cn0=9 cn0Label=Regex0Count cs1=-- phpMyAdmin

SQL Dump cs1Label=Regex1Name cn1=1 cn1Label=Regex1Count

To process the CEF events on ArcSight’s side, configure a new SmartConnector and a new UDP CEF receiver, and the events should be correctly parsed.


That looks great! But the next question is: “What to look for on pastebin.com?” Well, it depends on you… Based on your organization or business, there are things that you can’t miss. Here is a list of useful regular expressions that I often use:

RegEx – Purpose

company\.com – Your company domain name
@company\.com – Corporate e-mail addresses
CompanyName – Company name
MyFirstName MyLastName – Your full name
@xme – Twitter account
192.168.[1-3].[0-255] – IP address ranges
anonbelgium – Hacker groups
#lulz, #anonymous, #antisec – Trending Twitter hashtags
-----BEGIN RSA PRIVATE KEY-----, -----BEGIN DSA PRIVATE KEY-----, -----BEGIN CERTIFICATE----- – Interesting data! (public/private key theft or an encrypted data leak attempt)
-- MySQL dump – Interesting dumps!
belgium – My country
city – My city
((4\d{3})|(5[1-5]\d{2})|(6011))-?\d{4}-?\d{4}-?\d{4}|3[4,7]\d{13} – Credit cards
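Before deploying expressions like these, it is worth sanity-checking them; a quick Python session does the job. Using the credit card pattern from the table (sample numbers are well-known test values, not real cards):

import re

cc = re.compile(r"((4\d{3})|(5[1-5]\d{2})|(6011))-?\d{4}-?\d{4}-?\d{4}|3[4,7]\d{13}")

for sample in ("4111-1111-1111-1111", "378282246310005", "not a card"):
    print(sample, "->", bool(cc.search(sample)))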


Tracking Tweets in your SIEM

If pastebin.com can contain relevant pieces of information, as can blogs, do not underestimate the value of social networks! There are plenty of them: Google+, LinkedIn, Twitter, Facebook, etc. Let’s focus on Twitter.

Why Twitter? The micro-blogging website is used by more and more people and is a good communication vector to spread information in the wild. As an example, announcements made by @anonymous or @lulzsec are relayed on Twitter. Your company may be present on Twitter (for communication or helpdesk purposes), as may your competitors!

The principle remains the same as for pastebin.com: a Perl script calls the Twitter API to track your defined keywords and logs them to a Syslog server in free or CEF format:

# ./twittermon.pl --help

Usage: ./twittermon.pl --config=filepath [--facility=daemon ] [--debug] [--help]

[--cef-destination=fqdn|ip] [--cef-port=<1-65535> [--cef-severity=<1-10>]

[--pidfile=file] [--twitter-user=username]

[--twitter-pass=password]

Where:

--cef-destination : Send CEF events to the specified destination (ArcSight)

--cef-port : UDP port used by the CEF receiver (default: 514)

--cef-severity : Generate CEF events with the specified priority (default: 3)

--debug : Enable debug mode (verbose - do not detach)

--facility : Syslog facility to send events to (default: daemon)

--help : What you're reading now.

--pidfile : Location of the PID file (default: /var/run/pastemon.pid)

--config : Configuration file with keywords to match (send SIGUSR1 to reload)

--twitter-user : Your Twitter username

--twitter-pass : Your Twitter password

The configuration remains easy: You define what must be monitored and detected via a text file

(one keyword per line). But, there are limitations! Being based on the Twitter streaming API, the

script does not allow you to specify regular expressions but only words! Why? A first version of

the script implemented regular expressions and Twitter was accessed via the “sample” method.

In this case, the main issue was the relevance of the received feed! Only a sample of the full

activity was received. It is simply impossible to receive and process all Tweets posted on the

social network! By using the “track” method, Twitter will itself filter the flow of messages and send you only the relevant ones. The API does not support regular expressions (not even simple wildcards). The second limitation is the length of monitored keywords: it cannot exceed 60 characters. Access to the Twitter API is handled via the Perl module AnyEvent::Twitter::Stream.

Twittermon is available at: https://raw.github.com/xme/twittermon/master/twittermon.pl

Monitoring WordPress Blogs within a SIEM (ArcSight Example)

A new script was created to monitor WordPress blogs/websites with ArcSight. It is a Perl script called "WPrssmon". The script accepts a “--url” argument, so it is universal: you can use it to monitor the RSS feed of any WordPress blog/website.

Specifically, the script does the following:

runs continuously as a daemon

reads the RSS feed of a website

reads the latest posts/articles

searches for custom expressions

generates Syslog messages about any interesting result

generates events to be handled by any SIEM (ArcSight) - optional

waits a random number of seconds and then reads the RSS feed again

supports the use of a proxy

It has already been tested with ArcSight and works fine. In the rare case that you get errors (like "LWP::UserAgent" not found), libwww-perl may not be installed on your system:

Code:

$ sudo apt-get install libwww-perl

If your system is behind a proxy use the following command to define the proxy which will be

used by the script:

Code:

$ export HTTP_PROXY=http://proxy.company.com:8080

If this doesn't work for you, please replace the following line (twice):

Code:

# replace this line (it appears twice):
$ua->env_proxy;
# with an explicit proxy definition:
$ua->proxy(['http'], 'http://proxy.company.com:port/');

Alerting Code:


# MAIL ALERT: pipe an alert e-mail through the local mail(1) command
open(MAIL, "| mail -s ALERT webmaster\@aftershell.com ") || die "Mail failed: $!\n";
print MAIL "Alert by WPrssmon: $buffer\n";
close (MAIL);

WordPressMon is available at: http://www.aftershell.com/wp-content/uploads/2012/02/WPrssmon.tar

Examples: Log Analysis from Many Log Sources

1. IDS/IPS Logs

2. Firewall Logs

3. Network Device Logs

4. Wireless Access Point Logs

5. OS and Authentication logs

Case Studies from Real Business Cases

1. Case Studies & Business Cases

2. The logs that made the difference

SIEM Based Intrusion Detection

Attackers continue to become more skilled in their ability to penetrate organizations’ networks.

Defenders need intelligent systems which provide meaningful data to detect advanced attacks.

SIEM solutions are great tools for any security team.

However, getting the most out of a SIEM solution requires focus on reporting, correlating and

analyzing events across security systems. This is especially important when looking at intrusion

detection. Today’s attacks routinely bypass signature based systems and, therefore, require

additional data sources beyond simply detecting specific attack traffic. Spending the time and

effort to fully develop the correlation and reporting aspects of a SIEM can dramatically improve

a team’s ability to detect a compromise. While this paper focuses on Q1Labs Qradar, the intent

is to provide rules and alerts which could also be used in other environments.

Introduction

There is no question that preventing attacks is the preferred outcome for security practitioners.

The problem is that no matter how much time and money are spent on prevention technologies,

eventually, prevention will fail.


“This principle doesn’t mean you should abandon your prevention efforts. As a necessary

ingredient of the security process, it is always preferable to prevent intrusions than to recover

from them. Unfortunately, no security professional maintains a 1.000 batting average against

intruders. Prevention is a necessary but not sufficient component of security.” (Bejtlich, 2004)

Unfortunately, detecting successful attacks is increasingly difficult due to the level of sophistication and the targeted nature of today’s attacks. Gone are the days of dealing

with simple defacements and script kiddies. Today’s attackers are highly organized and can be

well funded. Attacks have evolved from simply being an annoyance to having the potential for

significant financial impact to a business and even reaching the level of national security

concern. Mandiant, a security consulting firm for Fortune 500 corporations and the US

Government, categorizes intrusions into three different levels, each having a different purpose

and level of sophistication. See Figure 1 for details (Harms, 2008).

These increasingly complex attacks require much more than signature based solutions can

provide. Individual security solutions, such as antivirus, are often easily bypassed via various

techniques.

According to Graham Ingram, General Manager of the Australian CERT, “The most popular

brands of antivirus on the market… have an 80 percent miss rate. That is not a detection rate, that is a miss rate.” (Kotadia, 2006) While IDS does employ other techniques beyond signature

detection alone, their success rate can also be limited, especially after the initial compromise.

Attackers often use valid credentials to move around networks and transfer data using normal

methods. Both cases would be very unlikely for IDS to catch. Therefore, organizations need

systems which provide an overall view of the entire network.

Richard Bejtlich, in his book The Tao of Network Security Monitoring says: “defensible networks

can be watched. A corollary of this principle is that defensible networks can be audited.

‘Accountants’ can make records of the ‘transactions’ occurring across and through the

enterprise. Analysts can scrutinize these records for signs of misuse and intrusion.” (Bejtlich,

2004)

Organizations need to take input from various security solutions and correlate events in order to

detect potential compromises. Security Information and Event Management (SIEM) systems

provide this capability and should be leveraged in order to maximize efforts in detecting today’s

advanced threats.

While SIEM solutions offer many benefits to the overall security of an organization, they are

often not funded or prioritized primarily based upon security.

A compliance requirement most often drives the purchase and implementation of these

systems. “The primary driver of the North American SIEM market continues to be regulatory

compliance. More than 80% of SIEM deployment projects are funded to close a compliance

gap.” (Nicolett, 2009) At first glance this may not seem to be a problem.

After all, funding a security project can be a major challenge given their cost to an organization.

However, SIEM projects funded by compliance will tend to be focused on compliance,

potentially at the expense of security. Marking the proverbial compliance check box and moving

on to other issues could be a costly mistake. Organizations with compliance requirements

should ensure that the project also impacts security operations and incident response before

considering a SIEM project successful.

SIEM solutions, including Q1Labs Qradar, typically offer both reporting and alerting capabilities.

Organizations should use both in detecting security incidents, however the decision about when

to use one over the other is a decision best made by each individual organization. For instance,

a large company with a 24 hour security operations center would likely want to employ more

alerting capabilities in order to limit the time between compromise and detection. However, a

smaller organization with a single information security staff member may decide that receiving

daily reports covering questionable activity may be the most effective method for prioritizing

security activities.


Regardless of the method of notification, the techniques discussed apply to both scenarios and

should work from either an alerting or reporting perspective. Additionally, while the Q1Labs

Qradar SIEM is our focus, these techniques should be effective for just about any centralized

logging infrastructure and even systems with logs in multiple places. The only requirement is to

be able to query data across multiple systems and data stores.

Another pitfall SIEM implementations face is not taking into account the post-implementation

human resource requirements. SIEM vendors tout various alerting capabilities and correlation

engines, however no implementation can be successful without being tuned and tailored to the

needs of the organization. This is especially true when tuning the system to detect incidents.

Effective detection requires knowledge of the existing infrastructure within the organization.

After presenting a previous paper on SIEM implementations, I was approached by numerous

people who ultimately had the same issue.

“We’ve implemented a SIEM, now what?” People seemed to be inundated with alerts and

frustrated with the lack of reliable and actionable data. Regardless of the system being

implemented, organizations should be able to create custom reports and alerts to detect attacks

accurately and efficiently.

The following examples come from real-world experience managing a multi-campus university

network. In order to provide additional context, sections include real world examples using

Q1Labs Qradar to detect an intrusion based upon the techniques discussed. The security

challenges on a university network are very interesting as various constituents have different

security requirements.

While there are university owned systems on which we can impose security controls similar to

our corporate counterparts; we also have residence halls which we must provide network access

to non-university owned computers. The challenge is detecting and responding to compromise

accurately and efficiently. Qradar is the primary resource for accomplishing this task.

System Setup

SIEM solutions can certainly mean different things to different people. For the purposes of our

discussion, a SIEM will be a system capable of receiving logs from virtually any device, operating

system or application in the enterprise. Most people think of network security related items

such as firewall, intrusion detection and VPN logs first when considering SIEM. This is perfectly

fine, but organizations need not limit themselves to these technologies. The goal should be to

have each and every log in the enterprise collected in the SIEM.

Devices and applications which do not necessarily have a focus on security still can add

significant value during an investigation. In addition to log collection, ideally a solution will

include the collection of session data and access to full content network captures.


SIEM solutions do not require session data, also called flows; however the ability to access this

information can dramatically improve the capability of the system.

“The basic elements of session data include the following: Source IP, Source port, Destination IP,

Destination port, Protocol (e.g., TCP, UDP, and ICMP), Timestamp, generally when the session

began and measure of the amount of information exchanged during the session.” (Bejtlich,

2006)

The most common flow records are Cisco’s NetFlow; however, there are several other options, including sFlow and J-Flow. These technologies collect traditional session data without any

application data. There are also more specialized NBAD products such as Q1Labs Qflow

collectors which allow for capture of a portion of the application data within the flow record.

This can greatly assist in determining whether an anomaly is an incident or false positive.

Full content network captures are typically not built directly into SIEM products.

However, SIEM solutions often have capabilities to integrate with systems providing full content

captures.


For example, NetWitness NextGen products provide full content network captures for network

forensics purposes. NetWitness has an application called SIEMLink designed to integrate with

an organization’s SIEM. “SIEMLink is a lightweight Windows application designed to act as a

transparent, real-time translator of critical security event data between Web-based consoles,

such as security event and information management (SIEM) systems and network and system

management (NSM) programs.” (NetWitness, 2009)

Collecting full content network captures is certainly not a simple task due to issues of

performance, storage and politics. However, if organizations can address the issues involved,

full content network captures will provide significant benefits in explaining what really

happened during a potential incident. “By keeping a record of the maximum amount of network

activity allowed by policy and collection hardware, analysts buy themselves the greatest

likelihood of understanding the extent of intrusions.”

(Bejtlich, 2004) For the purposes of this discussion we will assume the SIEM has access to log

and flow data, leaving full content network captures as an optional method for further

determining the extent of an intrusion.

Suspicious Traffic and Services

Developing various reports and alarms for intrusion detection can seem like an overwhelming

task. The best approach is to start with some simple alerts and build more advanced

correlations from there. These reports are very simple to create, yet still can be highly effective

in locating compromised machines. While many organizations are likely blocking much of the

traffic discussed, reporting on its attempted usage is still valuable to detect successful attacks.

In fact, one could argue that traffic which is prohibited by policy or technical controls, yet still

exists on the network, may be more indicative of malicious activity.

SMTP, IRC and DNS

A great place to start is outbound SMTP traffic. “Keep an eye out for a massive amount of SMTP

outbound traffic. Such patterns, especially coming from machines that are not supposed to be

SMTP servers, will likely point to a malware spam bot that has implanted itself in your

organization.” ("Shadowserver foundation information," 2009)

Monitoring outbound email traffic, regardless of whether the traffic is allowed or blocked by the

firewall, is a highly effective method for detecting compromised hosts. This can be done by

monitoring firewall or flow logs. Create a report or rule to monitor any outbound traffic

destined for port 25. However, be sure to exclude valid SMTP senders such as mail servers, web

servers which email forms and vulnerability scanners. Daily reports covering the previous 24

hours are effective or rules can be created to flag an alert after a certain threshold has been


crossed. I’ve used daily SMTP reports for years in a university dorm network with a very high

success rate. Standard practice for our team is to assume any machine generating 250 or more

SMTP events in a 24 hour period is compromised. Most often, the numbers will be much higher,

likely in the thousands of events.
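The report logic itself is trivial once the events are available. Here is a hedged Python sketch, assuming hypothetical event dicts with 'src_ip' and 'dst_port' fields taken from firewall or flow records over a 24-hour window:

from collections import Counter

KNOWN_SENDERS = {"10.0.0.25"}  # mail servers, form mailers, scanners (example)
THRESHOLD = 250  # events per 24 hours, per the practice described above

def suspected_spam_bots(events):
    # Count outbound destination-port-25 events per non-excluded source IP
    counts = Counter(ev["src_ip"] for ev in events
                     if ev["dst_port"] == 25 and ev["src_ip"] not in KNOWN_SENDERS)
    return {ip: n for ip, n in counts.items() if n >= THRESHOLD}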

Internet Relay Chat is a protocol used to chat via the Internet, most often by technically oriented

people. IRC also is “one of the very first types of botnet: bots were controlled via IRC (Internet

Relay Chat) channels. Each infected computer connected to the IRC server indicated in the body

of the bot program, and waited for commands from its master on a certain channel.” (Kamluk,

2008) IRC uses a range of ports, but most often port 6667. The existence of IRC traffic alone is

not a guarantee of malicious activity as IRC is still used for legitimate communications. An

effective method for determining what traffic is malicious is simply asking the user of the

computer if they know what IRC is or if they are using it. Regardless, monitoring outbound

traffic to ports 6660-6669 from firewall logs is still a good idea to detect potentially

compromised machines.

DNS activity is most certainly not malicious by itself. However, only DNS servers should be

communicating externally via DNS. Client workstations and non-DNS servers should not; external DNS traffic from them may therefore be a sign of compromise. Specific Trojans, called DNS changers, are

designed to change a host computer’s DNS server settings so clients resolve domains from

external servers and can be redirected to malicious sites.

“Check the machine's default DNS resolution servers. Are they what you would expect to see (a

company's or ISP's DNS servers, or that of your internal LAN's router?) If not, malware may be

redirecting DNS requests to a shady source.” ("Shadowserver foundation information," 2009)

Instead of checking individual machines, an organization can use the SIEM to monitor its entire environment. Create a SIEM rule or daily report to monitor outbound traffic to port 53,

excluding DNS servers.

Any systems which show up on the report should be investigated further for potential

compromise. These three methods for detecting compromised machines are certainly very

basic, but can still be effective and should be the start of any log based intrusion detection

planning.

A simple report, generated every 24 hours, with these three criteria is a good starting place. The

following report is an example of a daily user defined report in Qradar detailing external SMTP

activity (destination port 25) by source IP address.

Addresses for known SMTP servers are excluded. This report identified three infected clients

located in our student dormitories; source IP addresses have been removed.


Suspicious Outbound Internet Traffic

In addition to looking for potentially malicious traffic across the network, incident detection

plans should also look for general traffic which does not fit the profile of the originating

machine. For instance, there are certain systems and devices that really should not be making

outbound internet connections. For example, printers likely should not require outbound

internet access and can be used for malicious purposes. “Network printers have long been used

as jump off points in exploiting networks and for storage of hacking tools and data.” (Danford,

2009) Simple alerts can be created to notify the analyst of outbound connections from these

devices, assuming they are properly defined and segregated within the enterprise.

Embedded devices also pose a potential risk and likely do not require extensive Internet access.

Unfortunately, these devices are also often lacking the patch management procedures of their

PC and server counterparts. The Conficker worm garnered much attention during 2009 and for

good reason. Our network came through the worm relatively unscathed due to timely patching. The one area where we did see compromised machines was a set of Windows embedded devices used for

controlling classroom technology equipment. In fact, we believe the initial infection point was a

device which was shipped to us pre-loaded with Conficker. These compromises were detected

because the embedded systems began attempting connections outside of their permitted

network segment.

Organizations which segment their devices properly can use Qradar to monitor these network

segments for suspicious activity. This could be external Internet access or even simply network connections into or out of the local subnet. Some organizations will filter these subnets with firewalls or router ACLs. In this case, Qradar has a built-in rule for watching denied

connections called “Recon: Excessive Firewall Denies from Local Host”. This rule will fire if a

single source IP creates 40 denied connections to at least 40 destinations in five minutes. In

order to create a rule for specific segments, the rule can be copied, then modified to include

specific source IP ranges and different detection settings if so desired.
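The threshold logic behind such a rule is easy to approximate outside the product as well. The Python sketch below (event format is hypothetical: dicts with 'src_ip', 'dst_ip' and a datetime 'time') flags any source producing 40 denied connections to at least 40 distinct destinations within five minutes:

from datetime import timedelta

WINDOW = timedelta(minutes=5)

def excessive_denies(denied_events, min_denies=40, min_dests=40):
    denied_events = sorted(denied_events, key=lambda e: e["time"])
    flagged = set()
    start = 0
    for end, ev in enumerate(denied_events):
        # Slide the window so it spans at most five minutes
        while ev["time"] - denied_events[start]["time"] > WINDOW:
            start += 1
        same_src = [e for e in denied_events[start:end + 1]
                    if e["src_ip"] == ev["src_ip"]]
        if (len(same_src) >= min_denies
                and len({e["dst_ip"] for e in same_src}) >= min_dests
                and ev["src_ip"] not in flagged):
            flagged.add(ev["src_ip"])
            yield ev["src_ip"]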

Servers are another area where you can use system profiling of network activity to detect compromise. Do the vast majority of your servers really need to make outbound Internet connections? For those that do require Internet access, to how many addresses or domains?

This report will take a bit more time to put together, as you’ll likely need to filter out a few update sites, outgoing SMTP for mail relays, etc.

This is an area I would recommend getting system administrators involved if possible. Do some

initial tuning of the obvious false positives, then ask system administrators to review a quick

daily report of the systems for which they are responsible and alert the security team to any


anomalies. If possible, automate the process to create the report and open a recurring ticket in the IT ticketing system. Certainly, occasional audits to make sure the process is being

followed will be required. However, getting non-security focused staff involved in the process

only helps build more awareness of security related issues.

New Hosts and Services

The favorite saying of one of my colleagues is “What’s changed?” – his question anytime something isn’t working correctly. The same question very much applies to intrusion detection. New services can be indicative of recently installed backdoors or accidentally installed services which could become targets for new attacks. New hosts on the network could be a rogue device like a wireless access point or a non-standard workstation. While these events are not a guarantee of compromise, removing non-sanctioned devices can help prevent future attacks.

Qradar can collect this data from a variety of sources including integration with port scanners

such as Nmap, or vulnerability scanners like Nessus or Qualys and the collection of “passive”

data based upon flows. The simplest and one of the most effective methods is integration with

an already existing Nmap scanner.

This integration can be accomplished by defining your Nmap scanner using the VA Scanner button on the Admin tab of Qradar. Once the scanner has been set up with the proper credentials, scans can be scheduled from within the VA Scan section of the Asset tab. Qradar will log onto the defined Nmap scanner, launch the scan, retrieve the results and publish the data within the appropriate asset record. Figure 4 shows a sample asset record after being scanned with Nmap.

Qradar includes a standard rule to detect the presence of a new service in the DMZ. The “DMZ” is a network object which is defined during the tuning phase of a Qradar deployment. However, this rule can be easily modified or duplicated to watch for changes to any network segment or specific hosts. Figure 5 shows an offense generated when Qradar detected a new service in the DMZ based upon regular port scans using the Nmap integration. Double-clicking the “new port discovered” event (see arrow) would provide details regarding the port discovered, which in this case was port 3389 (Microsoft Remote Desktop).

Qradar detects new devices using the same Nmap scan data and another predefined rule. From a security perspective, this rule can be used to find rogue devices such as a

rogue access point. In addition to finding rogue devices these scans are highly effective in

finding devices with improperly configured network settings. Many organizations subnet

various devices to provide additional separation and security controls. Devices such as printers,

embedded systems, HVAC, etc. often have their own network segments.


In the university environment, keeping student-owned computers on the proper VLANs and segregated from faculty and staff networks is important. In addition to the canned rule Qradar provides, organizations can increase the effectiveness of their scans by looking for specific devices or the existence of devices which do not match what should exist in the subnet. Qradar can detect and exclude an operating system by adding the following criteria to any rule. From within the “Rule Test Stack Editor” in Qradar, select the Test Group “Event Property Tests” and select the criteria “when the username matches the following regex”. Next, change “username” to “OS” and change “regex” to a regular expression appropriate for the operating system you would like to detect or exclude (for example, a pattern such as Windows to match any Windows variant). However, I prefer to base operating system rules upon open/closed ports on the asset, as this method has proven more effective during our testing.

For instance, an organization with subnets dedicated to VOIP handsets could create a custom rule similar to Figure 6. In this case, the VOIP phones used do not have any listening ports; therefore the rule detects the presence of a device in the VOIP subnet with an open port. If such a device is detected, an offense is generated for investigation.

Conversely, the custom rule in Figure 7 is designed to catch any non-Windows device inside of a subnet designated for Windows-based computers. This helps to detect rogue access points, printers in the wrong subnet and non-Windows personal devices.


Darknets

Darknets are a classic method for detecting suspicious traffic. The concept is quite simple: create network segments inside your infrastructure which are routable but have no systems or devices set up to use the local network. Therefore, no system on your network should be attempting to access anything within the Darknet.

“Any packet that enters a Darknet is by its presence aberrant. No legitimate packets should be

sent to a Darknet. Such packets may have arrived by mistake or misconfiguration, but the

majority of such packets are sent by malware. This malware, actively scanning for vulnerable

devices, will send packets into the Darknet, and this is exactly what we want.” ("The Darknet

project,")

Qradar can monitor traffic events and flow records to watch for systems attempting to access

predefined Darknet addresses. During the initial setup of your network hierarchy, Darknets can

be defined. Qradar will then monitor these network segments based upon the rule “Suspicious

Activity: Communication with Known Watched Networks” and generate an offense accordingly.

While the default rule will work, it includes both watched network lists and Darknet addresses in

the same rule.

I prefer to have these separate and therefore create a customized rule for these categories.

The example in Figure 8 is not only an example of the Darknet address rule firing, but also a

good example of how Qradar correlates various suspicious network activities across both events

(firewalls in this case) and flow data. In addition, due to integration with identity information

the username of the person currently logged into the internal, attacking host is also displayed.
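As a minimal sketch of the Darknet principle outside of any SIEM, the following Perl fragment flags flow records destined for a Darknet range; the CSV flow format and the 10.66.0.0/16 range are assumptions for illustration.

use strict;
use warnings;
use NetAddr::IP;

# Hypothetical Darknet range; nothing legitimate should ever touch it.
my $darknet = NetAddr::IP->new('10.66.0.0/16');

# Flow records as CSV lines: src_ip,dst_ip
while (<>) {
    chomp;
    my ($src, $dst) = split /,/;
    my $ip = NetAddr::IP->new($dst) or next;
    print "DARKNET HIT: $src -> $dst\n" if $darknet->contains($ip);
}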


Authentication, Accounts and Remote Access

While attacks continue to evolve in their complexity, compromised accounts continue to be an effective method for intruders. Compromised accounts may not always be the initial attack vector; however, they are often used to move throughout the organization or elevate the privileges of the attacker.

Therefore, proper auditing of authentication attempts, account changes and access tracking can

be highly effective in detecting intrusions. Please note that each time a Windows event ID is

discussed there will be two numerical entries. Three-digit entries correspond to Windows XP or Server 2003; four-digit entries apply to Windows Vista, 7 and Server 2008.


Brute-force Attacks

According to the SANS Institute Top 20 list, “Brute-force attacks against remote services such as

SSH, FTP, and Telnet are still the most common form of attack to compromise servers facing the

Internet.” ("SANS: Top Twenty," 2007) SIEM is a perfect place to collect failed authentication

attempts which could be indicative of a brute force attack. Q1Labs Qradar will take

authentication events from a variety of sources such as SSH, FTP, Linux and Windows, and

categorize events together so reports and alarms are easily developed. This categorization

process is known as normalization.

A good first step in identifying brute-force attacks is to understand what is typical for your organization. A daily count of failed login attempts by device is a great statistic to have. Once you understand what’s normal for your organization, identifying attacks can be much easier. Creating this report is straightforward inside of Qradar. Create an event search covering the past 24 hours using the following criteria: Category = Authentication (High Level) and “Failed Authentication” (Low Level).

Under the “Column Definition” section of the event search, have the data sorted by device

based upon the sum of the event count from high to low. Another helpful variation of the above

report would be to sort the data based upon source IP address rather than device.
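Outside of Qradar, the same baseline can be gathered with a few lines of Perl run against a Linux authentication log; the log path and OpenSSH message format are assumptions about a typical setup:

#!/usr/bin/perl
use strict;
use warnings;

# Count failed SSH logins per source IP from a typical Linux auth log.
my %fails;
open my $log, '<', '/var/log/auth.log' or die "auth.log: $!";
while (<$log>) {
    $fails{$1}++ if /sshd\[\d+\]: Failed password for .* from (\S+)/;
}
close $log;

# Print source IPs from most to least failures to establish a baseline.
printf "%6d  %s\n", $fails{$_}, $_
    for sort { $fails{$b} <=> $fails{$a} } keys %fails;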

Organizations that lock accounts for a period of time after a certain number of failed

authentication attempts may also find daily statistics on locked accounts useful.

Organizations with Active Directory can accomplish this by reporting on Windows event ID

644/4740 or, within Qradar, create an event search with log sources of Active

Directory controllers and an event name of “User Account Locked Out”. Save the event search

and create a daily report based upon the search.

This data, collected over a period of time, should provide enough information to estimate how many accounts generally are locked within a 24-hour period. Use this baseline to compare reports and look for problems, or create an offense watching for more than X locked accounts within a given timeframe. These numbers will be unique to each organization; however, setting this kind of alert will help with early detection of brute-force attacks.

Beyond daily reports, Qradar provides several out-of-the-box rules aimed at identifying brute-force password attacks. The general purpose of these rules is to detect a certain number of failed login attempts followed by a successful login. These rules are highly effective for systems without significant login activity, such as routers and firewalls. However, detection is much more difficult on high-activity systems such as a web-based email server. Separating a brute-force attack from a user who forgot to change their Blackberry password is challenging.


The offense shown in Figure 9 was generated by Qradar based upon a successful brute-force attack against a Cisco device using SSH. The rule which fired uses the canned logic for brute-force attacks but is customized to watch for attacks specifically aimed at network equipment, and alerts are sent to both the network and security teams. Customizing the rules for specific devices helps separate alerts and aids prioritization.

Detecting brute-force attacks against general user accounts is certainly important, however

detecting attacks against privileged accounts is critical. A good starting point for detecting these

attacks is to create a list of accounts throughout the organization with elevated privileges. The

list should include obvious accounts such as root and administrator, but also system level

accounts used for services and database access.

Windows shops will want to include any account with domain/enterprise admin access as well.

In fact, one could argue that most IT staff will have some kind of elevated privileges worth monitoring. Custom brute-force rules can then be developed to look for attacks on privileged accounts. Copy the built-in Qradar brute-force rules and add additional requirements such as a list of critical usernames and/or system logs. You may also consider tuning the brute-force parameters for the number of attempts in a given timeframe to be more sensitive, given the accounts being watched.

Windows Account Creation and Permissions

Detecting brute-force attacks and various authentication events is one method for detecting a

compromised account. Another method for detecting potential compromise is tracking account

creation and privilege changes within an Active Directory domain. These events can help detect

when an attacker is attempting to increase their privileges or access sensitive resources.

However, monitoring account activity within an Active Directory domain really requires logging

beyond just Active Directory controllers.

Collecting Windows event logs from all systems, including individual workstations, is the best way to get a full picture of what is occurring on an organization’s Windows network. The following examples assume collection from all sources within the domain.

Creating accounts, while not necessarily the initial intrusion vector, can be a key method for

maintaining access. However, creating an account is also one of the most basic functions within

IT. Separating the malicious from the mundane can be difficult.

The most basic method would be to create a daily report listing the account creations and which account was used to create each one. This report can be automatically created using Qradar’s built-in event search and emailed to system administrators for review. If your organization has a strict naming convention, such as six characters followed by two digits, a rule could be created to flag account creations which do not meet organizational criteria; a sketch of this check follows below. Since many organizations have automated their employee account creations utilizing nightly scripts or automatic triggers from another system, rules could be created to list accounts created outside the nightly script timeframe or created outside of the automated process.
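As a minimal sketch of such a check, assuming a six-letters-plus-two-digits reading of the convention above and a daily list of new usernames on standard input:

#!/usr/bin/perl
use strict;
use warnings;

# Flag newly created accounts that do not match the naming convention of
# six letters followed by two digits (e.g. "jsmith01" passes, "hax0r" is flagged).
while (my $user = <>) {
    chomp $user;
    print "REVIEW: $user\n" unless $user =~ /^[a-z]{6}\d{2}$/i;
}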

Security teams need to understand their organization’s account creation process and build rules

based upon their specific requirements in order to best detect rogue account creations.

Most organizations are likely to prohibit or discourage the creation of local accounts. Those that do should consider creating an offense for the creation of a local account. This rule will help detect not only policy violations, but also attacks in which a local account is created in order to maintain or elevate access.

Next, let’s take a look at changes to an account’s privileges, focusing on Windows environments and Active Directory. Windows will log various events of interest in detecting attempts to change access permissions for an account. Qradar can be used to create a daily report of these event IDs for review, or to create an offense for immediate review. There will be numerous events which are valid during normal operations.

The key is identifying important areas to focus on. Most Active Directory domain permissions

come from group membership, therefore monitoring changes to key groups is important in

detecting intrusions. A good starting place is to create a rule inside of Qradar to alert when

changes to key groups are made.

The custom rule in Figure 10 watches logs for a member being added to a global group and then checks for “Domain Admins” within the payload of the event. Thus, whenever an account is added to the Domain Admins group, an offense is generated inside of Qradar and the appropriate staff notified via email. Organizations can add other key security groups to this rule for further coverage. Additional rules can also be created for local groups to cover items such as local administrator access to PCs.

Foreign Country Logins

Another method for detecting intrusions using valid credentials is to use SIEM to correlate logins

with geo-location data.


“One of the use cases we tackled was the monitoring of login attempts from foreign countries.

We wanted to keep a particularly close watch on successful logins from countries in which we

don't normally have employees in… we had to have the ability to extract usernames and IP

addresses from these logs; and, we had to have the ability to map an IP address to a country

code.” (Bejtlich, 2008)

Qradar is capable of providing similar data. First, within the rules section of the offense tab,

edit the building block titled “Category Definition: Countries with no Remote Access” and enter

countries where logins should not come from. Next, enable the rule “Anomaly: Remote Access

from Foreign Country” and any login events from banned countries will become offenses. If

needed, you can also further customize the alert to specific log sources. For instance, in our

University we would not be able to track international logins for our web-based email given the

large contingent of international programs and students.

However, monitoring our VPN and limiting access to only those areas with current operations provides valuable data. Certainly, the value of this capability depends upon the organization in question. However, organizations which are predominantly domestic or do business in a limited number of countries may find this capability helpful. Also, organizations may be able to target logins from specific countries known to be hotbeds for malicious activity.
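The same correlation can be sketched outside of a SIEM. The following minimal Perl example assumes the legacy MaxMind Geo::IP module with its country database installed, plus input lines already reduced to username/IP pairs; the allowed-country list is a placeholder to adjust per organization.

use strict;
use warnings;
use Geo::IP;

# Open the legacy MaxMind GeoIP country database.
my $gi = Geo::IP->new(GEOIP_MEMORY_CACHE);

# Countries from which logins are expected (adjust per organization).
my %allowed = map { $_ => 1 } qw(US CA GB);

while (my $line = <>) {            # input lines of "username ip"
    my ($user, $ip) = split ' ', $line;
    my $cc = $gi->country_code_by_addr($ip) || '??';
    print "ALERT: $user logged in from $cc ($ip)\n" unless $allowed{$cc};
}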

Adding Context and Correlation to IDS Alerts

IDS and IPS solutions, whether open source or commercial, can create a mountain of alerts to classify and respond to.

“On any given network, on any given day, Snort can fire thousands of alerts. Your task as an

intrusion analyst is to sift through the data, extract events of interest, and separate the false

positives from the actual attacks.” (Beale, Baker, et al, 2006)

SIEM solutions can help greatly in dealing with IDS alerts, as they typically offer a variety of options for reporting on and analyzing them. This can be done in a variety of ways, including different methods of reporting, adding knowledge of the target operating system or applications, and including data regarding vulnerabilities which exist on the target system.

First, consider the benefit of the customized reporting a SIEM can provide. IDS can be a noisy technology, and focusing effort on the most critical alerts can help find events worth investigating. For example, Qradar can be used to filter IDS events and create a report for high-value systems within the organization. Qradar allows users to assign an “Asset Weight” to each asset inside of the asset profile shown in Figure 4. These asset weights can then be used in various event searches and reports. For instance, an offense or alert could be created anytime an exploit is seen against a high-value target. This can help prioritize security analysts’ time appropriately.

Commercial IDS systems likely have solutions capable of doing this work for you; however, why not do so in a centralized console where the rest of the organization’s logs and alerts reside?

Also, due to support for multiple IDS solutions, Qradar can allow central collection and

correlation across multiple kinds of sensors.

Second, look for ways to add knowledge of the target operating system or installed applications

in order to make the report more effective. Qradar will parse supported IDS logs and then

categorize events based upon the data provided. This can allow users to filter out events which are not valid for their organization. For instance, one could create a report filtering out “Unix” alerts for your subnets containing Windows servers, or email a report of detected web application attacks to the web development team. Third, further data may be gathered via integration

with vulnerability scanners.

“Keep a list of vulnerable systems and refer to it when attacks occur. If you know your host is

not vulnerable to a particular attack, you can rest assured that the attack was not successful.”

("The Truth about," 2001)

Assets must first be scanned with a supported vulnerability scanner from within Qradar. Next,

Qradar watches intrusion detection logs for exploits targeting systems which are vulnerable to

the attack. Qradar has several canned rules to help provide this capability. The system includes the rules “Target Vulnerable to Detected Exploit”, “Target Vulnerable to Detected Exploit on Different Port” and “Target Vulnerable to Different Exploit than Detected on Attacked Port”. The

rule “Target Vulnerable to Detected Exploit” obviously has strong value; however, don’t discount

the other rules. IDS systems may not always be able to detect the exact exploit an attacker is

delivering, but may still detect malicious activity relating to the attack such as the existence of

shellcode or protocol anomalies.

Web Application Attacks

Web application attacks are a key vector for compromise in today’s enterprise networks.

Attacks such as SQL injection allow attackers to take control of internal resources via vulnerable

public facing web applications. Once the attacker has access to the internal database server, he

can attack other internal resources or exfiltrate sensitive data.

The Verizon Business 2009 Data Breach Investigations Supplemental Report states that SQL

injection was a “factor in 18% of breaches in caseload” and a “factor in 79% of records


compromised in caseload”. (Baker, Hylender, & Valentine, 2009) Cross-site scripting attacks allow attackers to compromise client systems visiting trusted resources.

Compromise could occur on internal corporate systems or on customers visiting your organization’s web site.

Certainly the most effective method for dealing with web application attacks is proper development practices. The goal should be to eliminate these vulnerabilities. However, if

vulnerabilities do exist, whether known or unknown, how do we detect attacks against them?

Web application attacks are very challenging as they use the same ports and services to conduct

malicious activity as are used for non-malicious activity.

From a logging perspective, there are several options for monitoring logs for malicious activity.

Logs can of course be collected from the web server itself, which is certainly the most common location. However, web server logs have one major disadvantage. “Web server logs do not

contain any data sent in the HTTP header, like POST parameters. The HTTP header can contain

valuable data, as most forms and their parameters are submitted by POST requests. This comes

as a big deficiency for web server log files.” (Meyer, 2008) Another, more effective, location for

generating valuable web application log files is a web application firewall. “WAF log files contain

as much information as those from a web server plus the policy decisions of the filter rules (e.g.

HTTP request blocked; file transfer size limit reached, etc.).

A WAF provides a wealth of information for filtering and detection purposes and is thus a good

place for the detection of attacks.” (Meyer, 2008) Organizations with a WAF in place, which is

supported by their SIEM, should consider doing their log analysis on those log files. However, a

WAF does require additional investment and is not an option for all organizations. Regardless,

organizations can analyze whatever logs are available to detect many common web attacks.

SQL Injection and Cross Site Scripting

Detecting SQL injection and cross site scripting attacks via web server logs can be challenging

due to the propensity for false positives and ability for attackers to encode attacks. Therefore,

some knowledge of the organization’s applications will be helpful in tuning detection methods.

Detection of web application attacks will focus on patterns known to be SQL injection or cross

site scripting attacks. Qradar allows for searching the payload of log files based upon regular

expressions. Therefore, the analyst can create log searches or alerts looking for specific attacks.

The following regular expressions were published on securityfocus.com. (Mookhey, 2004)

SQL Injection Patterns in Logs

• /(\%27)|(\')|(\-\-)|(\%23)|(#)/ix


This regular expression will detect the comment characters: the single quote (MSSQL), the double dash (Oracle), the hash, and their hexadecimal equivalents. These characters are used to terminate queries and are often part of SQL injection attacks.

• /((\%3D)|(=))[^\n]*((\%27)|(\')|(\-\-)|(\%3B)|(;))/i

“This signature first looks out for the = sign or its hex equivalent %3D. It then allows for zero or more non-newline characters, and then it checks for the single quote, the double dash or the semi colon.” (Mookhey, 2004)

• /\w*((\%27)|(\'))((\%6F)|o|(\%4F))((\%72)|r|(\%52))/ix

“\w* - zero or more alphanumeric or underscore characters

(\%27)|\' - the ubiquitous single-quote or its hex equivalent

(\%6F)|o|(\%4F))((\%72)|r|(\%52) - the word 'or' with various combinations of its upper and

lower case hex equivalents.” (Mookhey, 2004)

• /((\%27)|(\'))union/ix

This will detect the single quote in ASCII or hex followed by the Union keyword.

Other SQL commands can be substituted for union.

• /exec(\s|\+)+(s|x)p\w+/ix

This regular expression is specific to Microsoft SQL environments and will detect

the EXEC keyword signifying that a Microsoft stored procedure is to be run.

Cross Site Scripting

• /((\%3C)|<)((\%2F)|\/)*[a-z0-9\%]+((\%3E)|>)/ix

This regex will detect simple XSS attacks looking for HTML opening and closing

tags with text in between and their hex equivalents.

• /((\%3C)|<)((\%69)|i|(\%49))((\%6D)|m|(\%4D))((\%67)|g|(\%47))[^\n]+((\%3E)|>)/i

This regular expression will detect the “<img src” XSS attack.

• /((\%3C)|<)[^\n]+((\%3E)|>)/i

“This signature simply looks for the opening HTML tag, and its hex equivalent, followed by one

or more characters other than the newline, and then followed by the closing tag or its hex

equivalent. This may end up giving a few false positives depending upon how your Web

application and Web server are structured, but it is guaranteed to catch anything that even

remotely resembles a cross-site scripting attack.” (Mookhey, 2004)
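As a minimal sketch of putting these patterns to work outside of a SIEM, the following Perl fragment scans web log lines against a subset of the signatures above; treat it as illustrative, since real deployments need tuning for the false positives just discussed.

#!/usr/bin/perl
use strict;
use warnings;

# Scan web access log lines against a subset of the signatures above.
my @patterns = (
    qr/((\%27)|(\'))union/i,                             # UNION-based SQL injection
    qr/exec(\s|\+)+(s|x)p\w+/i,                          # MSSQL stored procedure call
    qr/((\%3C)|<)((\%2F)|\/)*[a-z0-9\%]+((\%3E)|>)/i,    # simple XSS tag
);

while (my $line = <>) {
    for my $re (@patterns) {
        if ($line =~ $re) { print "MATCH: $line"; last; }
    }
}

Run against an access log (for example, perl scan.pl access.log); matching lines are printed for analyst review.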


Web Application Honey Tokens

Beyond using regular expressions to look for web application attacks, organizations have another

option for detecting when someone is attempting to compromise one of their web applications.

Web application honey tokens are intended to create data or portions of the web site which no

normal activities should ever access. Therefore, if these fake items are accessed one can

assume that an attacker is attempting some kind of malicious activity.

First, create a fictitious “administration” web page and add the link to the disallowed section of

the web server’s robots.txt file. After this is in place, check web server logs for anyone accessing

the robots.txt file and later accessing the fake administration page. Second, add fake

authentication credentials into the html source of a specific web page.

Use the SIEM to query for anyone attempting to log in using these credentials. Any IP addresses

which attempt either of these two activities should be investigated further, added to your watch

list and potentially blocked. Figure 11 shows an alert created to detect the robots.txt web

honey token example. The rule also highlights the Qradar capability of creating multiple custom

rules (red circles) and combining them into a function.
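A minimal Perl sketch of the robots.txt honey token check, assuming combined-format web logs and a hypothetical /secret-admin/ decoy page listed in robots.txt:

#!/usr/bin/perl
use strict;
use warnings;

# Flag IPs that fetch robots.txt and later request the decoy admin page.
my %saw_robots;
while (<>) {
    my ($ip, $path) = /^(\S+) .*?"(?:GET|POST) (\S+)/ or next;
    $saw_robots{$ip} = 1 if $path eq '/robots.txt';
    print "HONEY TOKEN HIT: $ip requested $path\n"
        if $saw_robots{$ip} && $path =~ m{^/secret-admin/};
}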


Data Exfiltration

Ideally, compromised systems would be detected and remediated quickly enough to limit any

exposure of sensitive data. However, that is not always possible. Also, after a compromise is detected, analysts need to be able to determine if any data left the system in question or if the attacker used the compromised system to attack other targets.

Security teams need to have systems in place to monitor network traffic effectively enough to

address these challenges. The collection of session data at various locations throughout the

network can be a tremendous help in achieving this goal.

The use of encryption in documents, archives and communications channels can make detection

of sensitive data leaving the organization difficult with signatures and pattern matching alone.

However, session data can still be used to determine in general terms what the attacker’s next steps were on the network. Large outbound transfers may point the investigation towards

determining what data left the system.

Conversely, internal network traffic after exploitation may lead the incident response team to

other targets. Therefore, one of the first steps after an incident has been declared is to collect

as much session data as possible for all systems involved before, during and after the incident.

Qradar provides a search capability for flow data in the same manner as standard log files can

be searched. There are a multitude of options available including searching based upon IP

address, ports, applications, number of bytes, flow direction, etc.

Session data is not only helpful after an incident has been identified, but can also be the reason

for detecting an incident in the first place. For instance, why would a DNS server transfer a

50MB file outbound to a free file sharing service using SSL? Session data could highlight this

anomalous activity assuming the correct reports or alerts have been developed. Intrusion

detection based upon the amount of data transferred during a session will most often focus on

outbound file transfers. Analysts will be looking for large outbound flows or flows with

questionable destinations.
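As a minimal sketch of this kind of check, assuming exported flow records as CSV (src_ip,dst_ip,dst_port,bytes) and RFC 1918 10/8 internal addressing, the following Perl fragment flags large outbound transfers:

#!/usr/bin/perl
use strict;
use warnings;

# Flag large outbound flows from exported flow records.
my $threshold = 100 * 1024 * 1024;   # 100 MB; tune per environment
while (<>) {
    chomp;
    my ($src, $dst, $port, $bytes) = split /,/;
    next unless defined $bytes;
    print "LARGE OUTBOUND FLOW: $src -> $dst:$port ($bytes bytes)\n"
        if $src =~ /^10\./ && $dst !~ /^10\./ && $bytes > $threshold;
}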

The idea is that anyone intent on stealing corporate data must somehow transfer the data outside the network. If not done via physical means, then the Internet is the most likely option. Session data is a great indicator of compromise because attackers cannot conceal flow records via encryption. Qradar provides an alerting mechanism for network activity called sentries. Many sentries are created with the installation of a new system, and custom sentries can also be developed.

Creating reports and alerts for intrusion detection based upon the size or destination of flows will take consistent tuning. Clearly, there are numerous possibilities for false positives. The type of machine involved in the connection may also help in determining the likelihood of compromise. Again, the example used previously about a server making an outbound connection applies. Analysts will need to tune for false positives by identifying update sites, outsourcing relationships and commonly used services. Having some form of content data can be a significant help in quickly tuning false positives. Qradar includes a sentry to detect “External – Large Outbound Data Flow”.

This sentry can be used to create an offense for such activity or add events to an existing

offense to alert to possible data exfiltration after an incident. Another technique for detecting

intrusions based upon the size of network sessions is to look for large amounts of application

data inside of protocols which should have a limited size. Such activity could be indicative of a covert channel.

ICMP is an example of where this may apply. “Excessive amounts of data in ICMP traffic may

indicate use of covert channels, especially if the content appears obscured. ICMP data that

cannot be decoded is probably encrypted and encrypted content is a sure sign of a covert

channel.” (Bejtlich, 2006) Sentries are available for several protocol-related anomalies, including unidirectional ICMP traffic.

Frequency of requests can be another indicator of compromise. Security analysts need to understand what is “normal” for their network. Spikes in traffic or specific protocols should be a warning flag that something, possibly security related, is amiss on the network. These techniques can apply to the entire network, sub-networks or even specific hosts.

“A covert channel may bear the headers and fields needed to look like DNS, but the content may

be malicious. An internal workstation making very frequent DNS request may not be doing so

for business purposes” (Bejtlich, 2006).

Qradar offers many options for configuring this kind of statistical data. Creating a statistical report

of application usage is a good starting point. Once a baseline is developed, Qradar does have

some capability to develop reports based upon deltas, or changes, in the data which could be

very helpful in detecting anomalies.

Session duration is the final method for detecting intrusions from flow data.

Protocols known for short session lengths could be analyzed for longer sessions in order to

detect a possible covert channel. HTTP is a good example of a protocol that meets these

criteria.

“Web connections are usually short, even when HTTP 1.1 is used… A

Web connection generally lasts several seconds. If an analyst notices a sustained outbound

Web connection, something suspicious may be happening. An intruder may have crafted a

custom covert channel tunneled over port 80.” (Bejtlich, 2006)


Qradar comes with a predefined sentry called “Policy – External – Long Duration Flow Detected”. This rule will fire after a flow’s duration has exceeded 48 hours.

Tuning with Content

A report detailing outbound flow data flagged an FTP connection where approximately 1.3 gigabytes of data was transferred outbound from a University employee’s computer late at night. The destination IP address did not resolve when queried via DNS.

In addition, a WHOIS lookup on the IP address did not produce any relevant details. This situation would have been fairly labor-intensive to resolve if not for the partial application data collected with the flow in question. Instead of having to pursue a further investigation, we were able to quickly identify the transfer as non-malicious.

The highlighted portion shows the file name of the upload. Since the file was uploaded from a

basketball coach’s computer, a few hours after a scheduled basketball game, we can be very

confident that this exchange was not malicious. In addition, we can add the IP address to our

whitelist of known good transfer IPs so future reports will not flag these events.


Detecting Client Side Attacks

Client side attacks are one of the top methods for a successful intrusion. Instead of an attacker

targeting a server service directly, client side attacks are made possible by internal clients

visiting malicious websites or content.

“These are attacks that target vulnerabilities in client applications that interact with a malicious

server or process malicious data. Here, the client initiates the connection that could result in an

attack. If a client does not interact with a server, it is not at risk, because it doesn’t process any potentially harmful data sent from the server.” (Ridden, 2008)

Successful client side attacks are usually aimed at one of two goals: either making the system part of a botnet or using the compromised system to attack other internal resources.

When considering client side attacks, most people initially think of antivirus. Collecting and

correlating antivirus logs is certainly a good idea. However, as previously stated, today’s attacks

are regularly bypassing antivirus and this technology alone cannot be effective.


Antivirus products which include heuristic or anomaly-based detection may provide more valuable data, as they can be correlated with other indications of compromise to isolate which systems have a higher probability of being compromised. However, our focus will be on

correlation of log data outside of antivirus alone.

Since the majority of operating systems continue to be Microsoft Windows based, the Windows client logs are a good place to start. First, the authentication and account management rules discussed in section 4 also apply to client side attacks. Creating alerts for similar activities, such as locally created accounts and local group membership changes (especially the local Administrators group), is important.

Second, monitoring process information and new services can be very helpful in identifying

rogue applications and malware. Windows produces an event log entry as each process starts (event ID 592/4688) and exits (event ID 593/4689). Once a client side attack has been

identified, process logs can be extremely helpful in determining if any other systems inside the

organization have been compromised in a similar manner.

In addition, process logs may help in determining what an attacker did after the initial attack.

Similarly, Windows will log a new service being installed: event IDs 601 and 4697 will alert you to the installation of a new service (Franklin Smith). Third, Windows scheduled tasks are another event worth monitoring. Scheduled tasks are logged in Windows using event IDs 602 and 4698, “Scheduled Task Created” (Franklin Smith). Scheduled tasks can be used by attackers

to regularly schedule some kind of attack or malware update. Scheduled tasks should be

monitored on both servers and client workstations. Consider developing an alert, emailed to system administrators, for server-side scheduled tasks. This will allow administrators to help identify malicious actions.

Fourth, watch for changes to the audit policy or clearing of the event log. Attackers may attempt to hide their tracks by changing the audit policy to no longer log specific events, or by clearing the event log after a compromise. Either of these occurrences should be considered highly suspicious and investigated. Event IDs 612 and 4719 log changes to the audit policy, and event IDs 517 and 1102 log the security log being cleared (Franklin Smith).

Beyond Windows, there are a multitude of options for logging potentially malicious activity and correlating events. Organizations just need to make sure that whatever solutions they employ or are evaluating are supported by their SIEM solution. For instance, file integrity monitoring solutions can log changes to the file system, especially new executable files and registry changes; both are good indicators of compromise.


Host based intrusion detection systems may also have some of these features. Also, consider

third party systems and devices which may help in identifying compromised machines. IDS

systems may have alerts detecting possibly infected computers. Commercial options such as the FireEye Security Appliance or the freely available BotHunter solution are great options to integrate into your existing log activities. The more sources you can correlate, the more likely your organization is to be successful in detecting these attacks.

The following example in Figure 13 ties many of these concepts together into a real world

example using Qradar. The offense description shown in the first red circle is the name of the

exploit detected. This exploit refers to an obfuscated PDF document. The second red circle

indicates that the system is vulnerable to the exploit. While an obfuscated PDF is not by itself

malicious, Qradar has gathered vulnerability assessment data on the system and knows it’s

running a vulnerable version of Adobe Reader.

Below the blue circle, the top ten events are listed. Based upon the information provided, it appears that following the exploit there was some account logon activity and group membership changes. Clicking the “Events” button (blue circle) provides a full listing of the

events shown in Figure 14. This list shows various events including the original exploit, several

net.exe commands being issued, a user account being created and a group membership change.

Clicking the “User Account Created” event shows that an account called “haX0r” was created

locally. The “Group Member Added” event shows that the recently created account has been added to the Administrators group.


Conclusion

Attackers continue to find new methods for penetrating networks and compromising hosts.

Therefore, defenders need to look for indications of compromise from as many sources as

possible. Collecting and analyzing log data across the enterprise can be a challenging endeavor.

However, the wealth of information for intrusion detection analysts is well worth the effort.

SIEM solutions can help intrusion detection by collecting all relevant data in a central location and providing customizable alerting and reporting. In addition, SIEM solutions can provide

significant value by helping to determine whether or not an incident occurred. The challenge for

analysts is creating effective alerts in order to catch today’s sophisticated and well-funded

attackers.

Those new to SIEM should start small: implement a few of the basic methods, test them and understand their output before moving on to more advanced options. Also, analysts must have the

time and capability to continually review their detection mechanisms and look for new methods

for detecting compromise.

In the end, the goal of “SIEM Based Intrusion Detection” should be to have enough data

available to the analyst to identify a potential compromise and provide as much detail as

possible before beginning formal incident response processes.


SIEM Tools Have Blind Spots!

The various Log Management and SIEM tools available today have matured to a point that they

can provide effective reports and correlation analysis for just about any activity that appears in

the system logs we get via Applications, Databases, OS and Configuration Management.

But still, as was highlighted in the “2011 Data Breach Investigations Report” (prepared jointly by

Verizon, the U.S. Secret Service and the Dutch High-Tech Crime Unit), less than 1% of all known

data breaches are identified via log analysis. That’s an incredibly low number!

It’s important that we understand why this number remains so low. With privileged identity management platforms such as Lieberman Software’s Enterprise Random Password Manager, companies have solved the question of “who logged on”. But the question of “what they did” remains too cloudy.

The problem does not come from any inherent problem within the Log Management or SIEM

tools themselves. They do a great job of reporting on whatever log input they consume. The real

problem lies in what log data we are feeding them. The fact remains that existing system-

oriented logs have blind spots. There are hundreds of actions which users perform daily that

have major security implications, but unfortunately do not show up on the debug-style logs that

we have access to today. It boils down to one simple truth: If your apps don’t log it, your audit

report won’t show it.

The best way to overcome these blind spots is by adding User Activity Monitoring, such as that

provided by ObserveIT. User Activity Monitoring generates a different kind of log – a video log +

textual video analysis – which details the exact actions that a user performs. This is fundamentally different from the technical results of what s/he did, which is what most system

logs tell us. It’s like the difference between fingerprints and surveillance video: they are both

valid and accurate, but the video tells so much more than the fingerprints.

Examples of security blind spots are surprisingly common:

Adding an IP address on a Windows server: Consider a situation where a user adds a

new IP address on a Windows server, allowing hackers to bypass firewall settings. With

full security auditing enabled, a total of over 11,000 log events are triggered during the 30

seconds it takes to do this action. But within all that ‘noise’ there is nothing that states

what actually took place. Even searching for “IP” or the actual IP address doesn’t find it.

In contrast to this, an audit log that focuses on actual user actions would show that

“john” logged on as “administrator”, and then opened the TCP/IP Address dialog box for


editing. What’s more, a video replay of the user session would show exactly what John

did.

Editing a critical configuration file: An admin user might modify a sensitive config file

such as ‘hosts’. This could be done using Notepad, vi or any other text editor. In this

situation, the text editor would not produce any application logs, thus allowing the

change to go undetected. User Activity logs would show precisely that the ‘hosts’ file

was edited, and video replay would show the actual changes occurring within the file.

Running a script on a Linux server: If a user runs a script – let’s call it innocentScript – on a Linux server, existing system audits will come back with debug data such as process ID

and return value. But they wouldn’t show what commands or system calls are spawned

by this script. Using a User Activity log instead would show the actual screen I/O, and

would also show all those underlying system calls, allowing an auditor to know any

improper actions that this ‘innocent’ script actually performed.

Cloud apps and Desktop software: The issue is not just on network servers. Consider

cloud-based applications such as Salesforce, desktop software such as Excel, or even

bespoke legacy software. None of these applications provide logs that truly show what

the user has done. Some might provide debug-related details, but nothing that would

satisfy a security auditor.

As shown above, security audits that rely on existing system logs have blind spots in them due to the fact that system logs simply do not capture the relevant information needed. It might be possible for a highly-trained security expert to piece together the log entries and determine what actions took place. But it would involve a time-intensive forensic analysis by a scarce and expensive resource. (Do you have highly-trained security experts with nothing better to do than piece together log entries?) User Activity Monitoring augments existing system logs by showing precisely what the user did, thus eliminating security blind spots.


Forensics

What does a cyber-attack look like?

Timeline of a targeted cyber-attack:

Targeted infection – over the course of a campaign, an attacker will make several

attempts to gain access, for instance by tailoring emails or compromising third party

websites known to be used by the target company; through trial and error, they will

test the response of automated security solutions and, ultimately, will evade detection.

Reconnaissance – using the command and control channel, the attacker will seek to

gain higher access privileges within the organization; they will expand their foothold to

develop alternative access routes; and they will identify assets of interest within the IT

estate.

Command and control – with access established from wherever they are in the world,

the attacker is free to move around the network; their activity is carried out ‘under the

radar’, hidden within the noise of the organization’s legitimate network activity, such as

web browsing, and can take place over time periods as long as months or even years.

Historically we think of IT security as a ‘real-time’ task. But in reality, targeted attacks are not “bang! You’re dead” - they’re long-term infiltrations that can take weeks, months or even years to identify the really valuable information and steal it. We know customers who have had targeted attackers inside their IT estates for several years. Given that context, the timescales of detection analysis for this type of threat can be days or even weeks.

Figure 1. The different stages of a targeted cyber attack

Data collection and staging – often an attacker will cherry pick the most valuable data

from the organization; they may do so by using temporary staging areas that reduce the

likelihood of triggering any simple volumetric alerting systems.

Exfiltration – before transferring the data out of the estate, the attacker will modify or

encrypt it to avoid the transfer of information being detected by traditional DLP

appliances; their activities will once again blend into the background of legitimate

network traffic and will typically go undetected by most traditional security solutions.

Creating Your Own SIEM and Incident Response Toolkit Using Open Source

Tools

Introducing an Incident Response Toolkit

When an information security analyst is investigating incidents, he or she needs to have an

organized set of tools. Similar toolsets are found in other occupations, such as a doctor’s

surgical tray or a mechanic’s hand tools. Though an information security investigator may have

a toolbox with items such as write-blockers, media and antistatic bags, this paper will focus on

software (rather than hardware) tools.

Having the right collection of software tools can help to decrease the time required to identify a

system responsible for any questionable network activity and to isolate it from the network. An

incident response toolkit can automate repetitive tasks, provide useful information to other IT

professionals, and permit them to assist in remediation.

SIEM: The Core of the Toolkit

What is a SIEM?

A Security Information and Event Management (SIEM) solution is the core of an information

security worker’s incident response toolbox. The SIEM began as a product to collect event logs

from various systems into a central server but has grown to also detect and act on certain types

of behavior and to track compliance.

Why Create Your Own SIEM?

If you have talented developers, you may be able to save money by developing a SIEM in-house.

In some situations you are forced to create your own solution because there is no budget

available to purchase a supported product.


“There exists today a rich selection of tools that can perform all or some of the features of a full

SIEM at little or no cost, other than the time and energy to learn, install, configure, and

customize them” (Miller, Harris, Harper, Vandyke & Blask, 2011).

Another reason to create your own SIEM is to customize your solution – either because your

infrastructure is very complex or because your needs are different from most commercial SIEM

customers. One example of a unique environment is a university where IT services are often

decentralized.

A third reason to create your own SIEM is that you develop programming and tool set skills in

your staff. Additionally, when you create something yourself, it is much easier to diagnose and

correct problems.

Reasons Not to Create Your Own SIEM

If your homegrown SIEM will be vulnerable to SQL injection and cross-site scripting, then you

are better off not having a SIEM. You must ensure that developers have secure coding training

and experience so that any solutions you implement do not increase your risk. Visit

https://www.owasp.org to learn more about web application security and secure coding

practices. Their development guide, A Guide to Building Secure Web Applications and Web

Services, is a fantastic overview of secure coding principles.

If phone support, documentation, and maintenance from a vendor are important, then you

should not create your own SIEM. If the service goes offline because of a hardware failure or a

software misconfiguration while the developer of your SIEM is out sick (or worse, after he or she

has left your company), that would be a bad time to realize that you do not know anything

about the architecture or how to bring it back online. If you do create your own SIEM, ensure

that more than one person knows how to support and fix bugs, that code is documented, and

that a revision control system is in use.

How to Create a Toolkit and SIEM

This section will outline how a SIEM can be created using open source and other free tools,

drawing on the experience of Indiana University over the last seven years.

Collect Logs

The first step in creating a toolkit is to begin collecting relevant network connection logs and

network flow data. Before you start collecting, consult your legal counsel to determine the

appropriate retention schedule for these logs. You may decide to preserve these logs for as

short as one week or as long as a year or longer – each business has different needs.

Commonly these log types include DHCP leases, VPN sessions, Active Directory domain

controller logs, and various other authentication logs. In addition to logs related to

authentication, collect historical data related to network traffic such as firewall logs, Netflow,


and sFlow. These records contain source and destination IP addresses, source and destination

ports, IP protocols, TCP flags, number of bytes/packets, and more.

Initially, you will likely use a tool like syslog to collect these logs and store them in text files.

Syslog was developed by Eric Allman for the sendmail project, and became the default way UNIX

and Linux systems send and store logs (Costales & Allman, 2002). In addition to logging locally

on a server or workstation, syslog is commonly used to send logs over the network to a log

collection server. If Windows systems will be sending or receiving syslogs, then Snare

(http://www.intersectalliance.com/projects/index.html) is an option.
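As a minimal illustration of centralized collection with classic syslog, a single line in /etc/syslog.conf on each client forwards all messages over UDP to a collector; the hostname here is hypothetical.

# /etc/syslog.conf on each client: send everything to the central collector
*.*    @loghost.example.edu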

At first, you can use command-line tools like grep (or findstr in Windows) to search the log files on the server, but you’ll soon want to begin transferring the logs into a database. When using grep, consider taking advantage of the “-i” option to ignore upper/lower case (dvader versus DVader) and the “-w” option to match on word boundaries (generally, when searching for “192.168.2.3” you do not want to see results for “192.168.2.34”, for example).

Storing Logs in a Database

Storing logs in an indexed database will permit easier and quicker searching of the logs. For

example, you might have a record in a DHCP table that contains the start time that a Media

Access Control (MAC) address was leased an IP address and the time that the lease expired. It would then be trivial to search for the DHCP leases that any one MAC address had, or for the many MAC addresses that were leased a certain IP address. One

method to import the logs into the database is to push the logs received via Syslog into the

database using a scripting language like Perl. See Appendix A for sample code. Alternatively the

service that is the source of the logs may be able to push the logs straight into the database,

depending on the services and the type of database. Some database types to consider would be

MySQL, Oracle, and Microsoft SQL Server. Your database administrators may have a type of

database they prefer, so you may not be given an option. You may want to work with your

database administrators to create indices to speed up queries – at the cost of slowing down

inserts. The fields you will want to index are the ones that are most commonly queried:

probably IP addresses and usernames.

If a scripting language is used to import the logs into a database, they should be normalized at

the same time. An unmodified (“best evidence”) copy of the logs should be retained in text

format in case it is needed for evidence in a trial or hearing. Text normalization can help to

make querying of the logs less confusing and more efficient. The first field type to consider

normalizing is usernames, because it is easier to ensure that they’re all imported lowercase than

to remember to always query the field using a lowercase search parameter. Strings can be

converted to lowercase in SQL by using the LOWER function or like this in Perl using regular

expressions:

Page 168: Siem & log management

168 | P a g e

$string =~ tr/[A-Z]/[a-z]/;

The second field type to consider normalizing is timestamps. It is best to store timestamps in the

Coordinated Universal Time (UTC) standard to avoid confusion about various time zones and

daylight savings time. Be aware that timestamps in logs will often be in a local time zone rather

than UTC. See Appendix B for sample Perl code that converts timestamps to the UTC standard.

Potential issues with importing logs into a database are size and scalability. Some logs are

enormous and may be resource intensive to insert, index, and query. Be sure that you consider

and implement a retention policy for logs stored in databases.

Searching the Logs

Once you have your logs stored efficiently in a database, the next step is to create a stored

function that is given an IP address and timestamp and provides details about the system

assigned that IP address. This function will search through the database tables and other directory information retrievable via the Lightweight Directory Access Protocol (LDAP). The function

will then return details about the computer and user responsible for the system including:

computer name, MAC address, DHCP lease start/end, full name, email address, office location,

phone number, department/division, etc.

This sample Perl, DBI, and SQL code could be used to determine the MAC address that had the

lease of a particular IP address at the time of an incident. Note the use of bind parameters to

mitigate the threat of SQL Injection.
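A minimal sketch of such a lookup follows; the table and column names (dhcp_leases, ip, mac, lease_start, lease_end), the DSN and the credentials are all hypothetical.

use strict;
use warnings;
use DBI;

my ($ip, $timestamp) = @ARGV;   # e.g. "192.168.2.3" "2012-01-15 03:12:00"

# Connect to the log database (hypothetical DSN and account).
my $dbh = DBI->connect('DBI:mysql:database=logs;host=db.example.edu',
                       'ir_readonly', $ENV{LOG_DB_PW}, { RaiseError => 1 });

# Bind parameters keep the IP and timestamp out of the SQL string itself,
# mitigating SQL injection.
my $sth = $dbh->prepare(
    'SELECT mac FROM dhcp_leases
      WHERE ip = ? AND lease_start <= ? AND lease_end >= ?');
$sth->execute($ip, $timestamp, $timestamp);
my ($mac) = $sth->fetchrow_array;
print "$ip was leased to $mac\n" if defined $mac;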

Using your system of network accountability (possibly a NAC or NAP solution or something like NetReg, http://netreg.sourceforge.net/), you will discover the user responsible for that MAC address,

generally by determining a username or a distinct employee ID number. Using that information,

you can search directory information (such as LDAP) to determine values such as full name,

email address, phone number, and department. The following sample Perl code could grab the

department name of a user from a Windows domain controller via secure LDAP: When you have
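A minimal sketch, assuming the Net::LDAP module; the host, credentials, base DN, and username are hypothetical:

use strict;
use warnings;
use Net::LDAP;

my $ldap = Net::LDAP->new('ldaps://dc.example.com', version => 3)
    or die "LDAP connect failed: $@";

my $mesg = $ldap->bind('CN=svc-lookup,OU=Service,DC=example,DC=com',
                       password => 'secret');
die $mesg->error if $mesg->code;

# Look the user up by sAMAccountName and pull only the department attribute
my $search = $ldap->search(
    base   => 'DC=example,DC=com',
    filter => '(sAMAccountName=jdoe)',
    attrs  => ['department'],
);
die $search->error if $search->code;

print $_->get_value('department'), "\n" for $search->entries;
$ldap->unbind;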

With this code in place, you will be able to take the IP address and timestamp provided by your intrusion detection system or an external report (e.g., SpamCop or a DMCA notice) and easily track down the device and user responsible. Keep in mind that timestamps in these reports may not always be in the same time zone as your logs; Appendix B provides sample Perl code to convert timestamps to and from the UTC standard.

You may eventually expose these stored functions via a web service, so be sure to sanitize any input received and use bind parameters in your SQL statements. See the OWASP SQL Injection Prevention Cheat Sheet for more details, including examples for Oracle, MySQL, and SQL Server: https://www.owasp.org/index.php/SQL_Injection_Prevention_Cheat_Sheet


Quarantine and Blocking Tools

Once the ability to automate identification has been improved, the next step is to make it easier to place network blocks. For example, instead of manually using the Active Directory Users and Computers snap-in of the Microsoft Management Console (MMC) to add usernames to a group of users denied the VPN service, create a reusable script that performs the same function using LDAP (see the sketch below). To make the best use of your incident response staff’s time, create scripts to automate as many tasks as possible.
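A minimal Net::LDAP sketch of that idea; the group and user DNs are hypothetical, and the group is assumed to be one whose members are denied VPN access:

use strict;
use warnings;
use Net::LDAP;

my $group_dn = 'CN=VPN-Denied,OU=Groups,DC=example,DC=com';
my $user_dn  = 'CN=jdoe,OU=Staff,DC=example,DC=com';

my $ldap = Net::LDAP->new('ldaps://dc.example.com') or die "connect: $@";
my $mesg = $ldap->bind('CN=svc-block,OU=Service,DC=example,DC=com',
                       password => 'secret');
die $mesg->error if $mesg->code;

# Adding the user's DN to the group's member attribute mirrors what the
# Active Directory Users and Computers snap-in does by hand.
$mesg = $ldap->modify($group_dn, add => { member => $user_dn });
die $mesg->error if $mesg->code;
$ldap->unbind;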

These scripts can be grouped together to automate identification, notification, and quarantine.

Whenever a network block is placed or a notification is sent, record that action in a database.

You’ll be able to use this data to track recidivism, identify trends, and provide metrics. Again,

consider data retention policies for these records.

Notification

When a device or user needs to be blocked from the network, notification should be sent to the user, the IT staff in that department, or both. Instead of composing these notifications manually, prepare canned messages for several notification types. These messages can contain placeholders that your code replaces with details pertinent to the incident at hand. In addition to explaining what activity was detected (the reason for the block), these messages should provide remediation information and instructions. In some cases the notification may be as simple as “contact the helpdesk”, but in other situations it may provide detailed instructions to back up data, wipe the hard drive, and reinstall the operating system. A minimal placeholder-substitution sketch follows.
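This Perl sketch fills hypothetical template fields with incident details:

use strict;
use warnings;

my %fields = (
    IP        => '192.0.2.10',
    TIMESTAMP => '2012-03-15 14:30:05 UTC',
    REASON    => 'FTPd on non-standard port',
);

my $template = "Your device at {IP} was blocked at {TIMESTAMP}.\n"
             . "Reason: {REASON}. Contact the helpdesk to remediate.\n";

# Replace each {NAME} placeholder with its value, leaving unknown ones intact
(my $message = $template) =~
    s/\{(\w+)\}/exists $fields{$1} ? $fields{$1} : "{$1}"/ge;
print $message;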

Most issue tracking systems have functionality built in to support pre-defined frequently used

messages. The open source Request Tracker (RT) calls this feature RTFM (RT FAQ Manager).

Issue tracking systems will be discussed further in the Incident Metrics section of this document.

Provide Lookup Tools for the Helpdesk

In order to help users quickly remediate system compromises or network blocks, the helpdesk

will benefit from having information about the detected event that caused the quarantine of the

system. Depending on your environment and the amount of trust you have in your helpdesk,

you may provide a minimal amount of information (e.g. the block reason being “FTPd on non-

standard port”) or as much as the packet capture and signature details from your intrusion

detection system.

Incident Metrics

Incident metrics could be stored in your SIEM or in a separate tracking system. A tracking

system (such as Best Practical’s open source Request Tracker: http://bestpractical.com/rt/) can

be used to document incidents and related correspondence and metrics. In most issue tracking

systems, tickets can be linked to one another and custom fields can be created and used.


In addition to the obvious benefits of tracking correspondence and actions taken, these systems

provide the ability to track related events and trends. A repeat offender, for example, may need

extra attention or a referral to a disciplinary office.

Verizon Enterprise Risk and Incident Sharing (VERIS) Framework

One option for tracking incident metrics is the Verizon Enterprise Risk and Incident Sharing (VERIS) framework (Verizon, 2010). Verizon developed a set of metrics for the purpose of “capturing key details surrounding a breach” based on these four categories:

Agent (whose actions affected the asset)

Action (what actions affected the asset)

Asset (what assets were affected)

Attribute (how the asset was affected)

This framework makes it straightforward to collect useful metrics. For example, you could display a chart comparing external hacking incidents in which the confidentiality of a server was compromised against internal misuse incidents in which the usability of the network was affected. Whether or not the VERIS framework is used, incidents should be categorized and metrics reviewed at least quarterly. In addition to highlighting current threats and challenges, metrics can improve resource allocation and point out tasks whose automation via scripts would improve efficiency.

Charts

An easy way to visualize metrics is by creating charts, which provide several benefits. If displayed on a monitor in a Security Operations Center (SOC), charts make it easy for an information security analyst to notice a spike or a change in a trend, such as a large number of authentication failures. Over a longer period (e.g., months or quarters), charts can be used in a quarterly report to management to demonstrate the team’s value or to warn about new trends.

Three easy-to-use chart packages are pChart, Google Image Charts, and Google Chart Tools. pChart uses PHP and is open source; it is a great option if PHP is your preferred programming language. Google Image Charts creates live image files on a Google server in response to the data being provided as parameters in the URL. This is probably the easiest way to start creating charts, but it has limitations on image size and URL length.
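For illustration, a Google Image Charts request is just a URL; the data and title below are made up, while cht=lc selects a line chart, chs the size, and chd the data series:

http://chart.apis.google.com/chart?cht=lc&chs=400x200&chd=t:4,9,7,12&chxt=x,y&chtt=Incidents+per+Week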

Google Chart Tools, the search giant’s newer solution, works with any programming language and uses JavaScript and HTML to display the data. Its charts are a little fancier (mouse-over interactions) than Google Image Charts and have no limitations on image size or number of data points.

Determine which metrics are most useful to chart (e.g. number of incidents, incident type or

severity). Appendix D contains Perl code that creates Line Charts using Google Chart Tools.


Consider using charts in quarterly reports or cycling on a display in the Information Security

Operations Center.

Keep Expanding the Toolkit

In addition to the tools mentioned above, there are likely some tools that could really benefit

the particular environment you work in. Note the procedures your incident response staff

routinely undertake and explore how those tasks could be improved or automated by adding

tools or scripts to your virtual toolbox. A few samples are listed here:

7. Event Correlation and Information Sharing

Event correlation is the process of connecting related events within a large stream of events. A good example of event correlation is the detection of a brute force authentication attack. Once a SIEM is brought up and information about security events is being stored in a database that is easy to query, it is time to begin performing event correlation. Event correlation can be as simple as keeping a list of IP addresses known to be used by miscreants and alerting when activity is detected from those addresses; a toy sketch follows.
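A toy Perl sketch of this simplest form of correlation; the blacklist file name is hypothetical, with one IP address per line:

use strict;
use warnings;

open my $bl, '<', 'miscreants.txt' or die "open: $!";
my %bad = map { chomp; ($_ => 1) } <$bl>;
close $bl;

# Scan log lines on STDIN and alert on any blacklisted source address
while (my $line = <STDIN>) {
    if ($line =~ /\b(\d{1,3}(?:\.\d{1,3}){3})\b/ && $bad{$1}) {
        print "ALERT: activity from watched address $1: $line";
    }
}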

The open source Simple Event Correlator (SEC, http://simpleevcorr.sourceforge.net/) is a lightweight tool that reads from log files and correlates events based on a configuration file. SEC can write to a file or execute a program in response to events triggering a rule.
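For a flavor of SEC’s configuration format, a rule along these lines (the pattern and threshold are illustrative) would alert after five failed SSH logins from one address within sixty seconds:

type=SingleWithThreshold
ptype=RegExp
pattern=sshd\[\d+\]: Failed password for \S+ from ([\d.]+)
desc=SSH brute force from $1
action=write - possible SSH brute force from $1
window=60
thresh=5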

Many issue tracking systems come with an API or another method of interacting with them from another system. Request Tracker, for example, comes with a RESTful web service that your event correlation solution can use to create new tickets.

In addition to recording information about threats and attacks, there are good

reasons to share that information with trusted partners. Your information may

help them protect their network and their information may help you protect

yours. Information Sharing and Analysis Centers (ISAC) like the IT-ISAC and the

REN-ISAC exist to promote information sharing about threats. The information

sharing is protected by nondisclosure agreement. Information sharing has been

so useful that these ISACs are developing infrastructures to aid the information

sharing process. The REN-ISAC’s Security Event System is one such example:

http://www.ren-isac.net/ses/

The FBI’s InfraGard program is a system that promotes the sharing of

information about threats to the national infrastructure. There is a strong IT


security presence in the InfraGard program but it also covers sectors such as

agriculture, banking, chemical, energy, and transportation.

8. Null Route Injection

Using BGP, inject null routes into routers for IP addresses (internal or external) that are misbehaving. This method is preferable to blocking a device at the switch port or firewall because the blocks take effect immediately. Network engineers can set up a null route injection API, which is ideal for security analysts who have the authorization to quarantine devices from the network but do not maintain the network hardware. Injecting null routes can block a device using DHCP sooner than waiting for the device to be denied a lease renewal: in addition to blocking the MAC address from DHCP renewal, inject a null route for the duration of the current lease. An illustrative trigger-router configuration is shown below.
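For illustration only: on a Cisco IOS trigger router, a remotely triggered black hole often boils down to a tagged static route that BGP redistributes (the address and tag are examples, not a recommended setup):

ip route 192.0.2.66 255.255.255.255 Null0 tag 666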

9. WHOIS Notification

When an external party is attacking devices on your network, that activity

should usually be reported to the service provider responsible for the source IP

address. In order to determine who should be alerted, the WHOIS protocol is

used. One option is to point your web browser to http://whois.arin.net/ui and

manually look up the IP address. The American Registry for Internet Numbers

(ARIN), however, will only provide results if the IP address is in the United States

or Canada.

There are separate Regional Internet Registries (RIRs) for different areas of the

world: LACNIC (South America, Mexico, and a few surrounding countries), APNIC

(China, India, Japan, Australia and others), AFRINIC (the African continent), and

RIPE (Europe, Russia, the Middle East and others).

Instead of using a web browser to manually search each of the registries until you get a result, code can be written to handle this for you. The Perl code in Appendix C will return an email address from the WHOIS data when given an IP address; the basic technique is sketched below.
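A minimal sketch using a raw WHOIS query over TCP port 43 (the OrgAbuseEmail field is ARIN-specific, and the address is an example):

use strict;
use warnings;
use IO::Socket::INET;

my $ip   = '192.0.2.66';
my $sock = IO::Socket::INET->new(
    PeerAddr => 'whois.arin.net',
    PeerPort => 43,
    Proto    => 'tcp',
) or die "connect: $!";

print $sock "$ip\r\n";
while (my $line = <$sock>) {
    print "abuse contact: $1\n" if $line =~ /^OrgAbuseEmail:\s*(\S+)/;
}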

10. Password/Passphrase Scramble

In the event that an account is compromised and under the control of a miscreant, the ability to disable access to the account can be critical. Another option besides disabling the account is to scramble the password or passphrase. One reason the password scramble may be preferable is that email to a disabled account would bounce, but would reach an account with a scrambled password just fine. Another reason is that the helpdesk likely has access to set a new password but not to enable accounts. Be sure to disable any self-service password reset functionality, or else the miscreant may continue to abuse the account.
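A minimal sketch of generating a throwaway scrambled value in Perl; note that rand() is not a cryptographic source, so production code should prefer a CSPRNG such as /dev/urandom:

use strict;
use warnings;

my @chars = ('a' .. 'z', 'A' .. 'Z', '0' .. '9');
# 20 characters drawn from the set above; nobody needs to remember this value
my $scrambled = join '', map { $chars[int rand @chars] } 1 .. 20;
print "$scrambled\n";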

11. Self-Service Remediation

If your users can be trusted enough, consider creating a web interface where

they can view information about blocks or quarantines placed against their

devices or accounts. After reviewing remediation information, they can assert

they performed the required steps and initiate a procedure to unblock the

device or account. Of course steps should be taken to identify abuse of this

trust and to blacklist abusers. Web pages that accept user input need to be

particularly thorough in sanitizing input to prevent attacks such as SQL injection

and cross-site scripting.

Conclusion

Creating your own SIEM can be a fantastic way to improve your incident response capability. This is particularly true if you do not have a budget for a commercial option, or if a custom SIEM is preferred because of a complex environment.

Data Enrichment for Simpler and Faster Forensic Analysis

1. Detecting an intrusion or breach

2. Detecting rootkits (loading/installation of drivers)

3. Detecting external/internal brute force attacks

Inspecting Past Incidents:

In the case of misfeasors, a network-based approach to detect violations of need-to-know policies, or a top-down structured analysis of insider actions from high-level goals, is more appropriate. These approaches allow the identification of unauthorized activities such as anomalous downloads, suspicious installation of software, and retrieval of documents outside defined constraints.

Another approach to detect misfeasors relies on a bottom-up approach based on the correlation

of evidence collected from several sensors to infer malicious intents from insiders.

Two challenges here are to log with sufficient details about the events taking place, and to be

able to relate actions logged to unique individuals. In fact, Verizon reports that only 19%

of the analyzed organizations that had data breaches in 2008 had a unique ID (digital

identification such as login) assigned to each person with computer access to their assets; in

81% of the cases the organization used shared accounts for system access.


Reporting

1. Reports for network security

2. Network optimization

3. Reports for regulatory compliance purposes

4. Report templates are provided for ISO, COBIT, GLB, HIPAA, PCI, and Sarbanes-Oxley

Open Source Solutions

SPLUNK

Security

Investigate security threats faster, reducing risk and the attack window, by searching and analyzing all your logs, audit trails and any other security-relevant data across your entire IT infrastructure from one place.

Reduce operational complexity and cost by performing log management using the same infrastructure as change monitoring, operational monitoring and security without the need for additional agents.


Understand your security posture by generating comprehensive reports in seconds across all your logs, audit trails and other security relevant data.

Compliance

Meet requirements to capture any and all logs, even application logs, in real time.

Provide clear chain-of-evidence, even with application logs.

Pass compliance audits with minimal effort by quickly generating standard and ad-hoc reports across all logs, audit trails and other machine data from one place.

Network Operations

Improve your Mean Time to Investigate and Resolve issues (MTTI/MTTR) by searching and analyzing across your log files, including your application logs, audit trails and other machine data to efficiently troubleshoot problems.

Reduce operational complexity and cost by performing log management using the same infrastructure as change monitoring, operational monitoring and security without the need for additional agents.

Perform log analysis across system boundaries by centralizing all your logs and other machine data, providing the ability to rapidly search, alert and report on this data.


Log Management Using Splunk

Splunk indexes logs in any format from any data source, in real time. Unlike syslog or other network-based log appliances, you can even capture new events in application log files as they happen. You can define how long to keep data and activate Splunk's data signing to fulfill compliance-mandated log retention controls.

All of your IT staff - sysadmins, security analysts, developers and auditors - can search this data to troubleshoot problems and investigate security incidents from the application tier down to the network.

Users with different expertise can add their knowledge by classifying and tagging events such as specific error codes, identifying and naming fields such as IP addresses or transaction IDs, breaking down silos of knowledge.

Users become proactive by saving and scheduling searches to monitor and alert on specific events, patterns and thresholds.

Users can also set up their own reports and dashboards to summarize logged activity, such as firewall traffic reports, errors and warnings by component, and user login activity. As users become accustomed to searching logs with Splunk, they'll start reviewing logs routinely and noticing anomalies and trends that they wouldn't pick up with traditional monitoring.

Security’s New Challenges

The role of IT security is expanding, driven by new and evolving security use cases with risk implications for the business. Kevin Mandia of Mandiant estimates that there are “thousands of companies that have active APT (Advanced Persistent Threat) malware.” This malware is left behind through targeted attacks from persistent adversaries. Several aspects of the current conventional approach help explain the proliferation of security threats:

In many organizations, perimeter defenses remain the primary focus for the security team

Signature- and rule-based systems used by security teams are not able to keep up with the flood of new attacks

SIEMs are primarily set to collect data from signature-based or rule-based systems

Security incidents are identified in the absence of contextual data from IT operations

Canned reports have given the impression that critical thinking and analysis are not necessary

Systems lack the scale and analytics needed to map potential threats against large data sets over long periods of time

This conventional security mindset and approach does not cover “unknown threats” from newer, more sophisticated malware that:

Leverages data from social media sites to support social engineering

Obtains entry into the network via end users and endpoints

Evades detection (redefined low-and-slow techniques)

Uses unique attack patterns that allow the malware to be disguised as a “normal” application

With a much broader set of possible attack vectors and more innovative and targeted attacks

coming from persistent adversaries, the amount and types of data analyzed must be expanded.

A security intelligence approach is one that watches for known threats as reported by signature

and rule based systems and watches for unknown threats using extensive analytics on the

behaviors of system users. Normal user activities need to be monitored to understand time-

based patterns of access, usage, and location.

Splunk: Big Data and Security Intelligence

An approach to security that applies pattern analysis to user activities around the most sensitive business data aligns with business risk. The more operational and security data collected, the better the insight into business risk. Collecting and correlating data from the widest possible sources is the first step to gaining visibility of your infrastructure and improving your security posture.

Behavior-based analysis is the next step in a security intelligence approach. In cooperation with the business, identify your most important digital assets. These could be data stores of personally identifiable information (PII), intellectual property, internal emails, or other information retained on systems that are of high value to attackers. The final step is to apply an “actor-based” approach to understand the modus operandi and methods of potential adversaries. Security intelligence analysts need to routinely ask:

Who would I target to access the data and systems of highest value?

What methods could I use to facilitate the stealthy spread of malware?

How can I make sure my command-and-control communications are not detected?

What changes should I make to the host to make sure my malware stays resident in the enterprise?

What would abnormalities in my machine data look like in the event of an attempted email exfiltration or a transfer of PII outside the company?

What host network services should be monitored for changes?

What malware behaviors can be differentiated in log data based on time of day, length of time, and location of origin?

A behavior-based approach to advanced persistent threats using pattern analysis enables an

advanced approach to threat detection, as recommended by the Security for Business

Innovation Council.

It is important to note that having a big-data approach for unknown threats doesn’t supplant

the traditional approach for monitoring known threats. Watching for known threats using

elements of a conventional approach to security is still a requirement.


Splunk’s timeline view can be used to focus on the precise moment a security event occurred. Any search result can be turned into a report for distribution. This is especially useful for ad-hoc queries in support of compliance initiatives such as PCI, SOX or HIPAA.

Real-time Forensics Operationalized

Once a forensic investigation is complete, Splunk searches can be saved and monitored in real

time. Real-time alerts can be routed to the appropriate security team members for follow-up.

Correlation across system data by vendor or data type is supported in Splunk’s easy-to-use

search language.


Splunk’s search language supports correlations that can generate alerts based on a combination

of specific conditions, patterns in system data or when a specific threshold is reached.

Splunk lets you see real-time information from security and network devices, operating systems,

databases and applications on one timeline, enabling security teams to quickly detect and

understand the end-to-end implications of a security event. Splunk watches for hard-to-detect

patterns of malicious activity in machine data that traditional security systems may not register.

This approach can also provide the building blocks for a variety of supported fraud and theft

detection use cases.

Metrics and Operational Visibility

Understanding business risk requires a metrics-based approach to measure effectiveness over

time. Splunk’s built-in search language contains the commands needed to express search results

as tables, graphics, and timelines on security dashboards.
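For example, a saved search along these lines (the sourcetype and search terms are hypothetical) could feed an hourly authentication-failure chart on a dashboard:

sourcetype=linux_secure "Failed password" | timechart span=1h count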

Key performance indicators (KPIs) can be monitored by business unit, compliance type, location and more.

Real-time Correlation and Alerting

Correlation of information from different data sets can reduce false positives and provide additional insight and context. For long-term correlations, Splunk can write individual system events to internal files that are also monitored by Splunk and aged out over time.

If the right group of events is written to the file before it is aged out, the correlation is complete and an alert is issued. Splunk supports a rich set of alert creation criteria, providing rule-based alert suppression and thresholds.

Reaching For Security Intelligence

Security Intelligence is the process of collecting information and applying the knowledge, creativity, and skill of the security team to derive business value. Most organizations now have to be concerned about two types of threats. “Known threats” are the ones reported by signature- and rule-based systems such as anti-virus, IDS/IPS, firewalls, and security information and event management (SIEM) systems. The other kind is the “unknown threat.”

Monitoring Unknown Threats

Unknown threats comprise abnormal patterns in “normal” IT data. Normal IT data is generated by the use of enabler services that humans rely on every day. This data is the reflection of human-to-machine and machine-to-machine interactions and activities. Our normal activities include badging into the building, surfing the web, getting an IP address from a DHCP server, using DNS, using a VPN, using email, and accessing enterprise applications and company information. It is in these normal activities that attackers want to hide.

Patterns of human activity seen in this data follow business patterns and happen within parameters of time and location. Splunk can be set to monitor for thresholds and outliers in this data that can reveal stealthy malware activities. Splunk’s analytics language supports threat-scenario-based thinking that allows the security professional to ask any question of the data, ultimately searching for “unknown threats.” Employing this strategy to monitor the enterprise’s most critical data assets is a risk-based approach aligned with business goals and objectives.

Supporting the Security Intelligence Analyst

Security Intelligence solutions move beyond traditional SIEM use cases of providing canned reports, dashboards, and monitoring for known threats to support a security intelligence analyst’s need for data exploration to find abnormal activity patterns in massive amounts of normal data. Splunk supports the newest role in security: the Security Intelligence Analyst.

This approach supports the newest versions of regulatory requirements and frameworks such as

FFIEC, HIPAA, and FISMA that emphasize data protection and privacy. The SEC's recent guidance

that public companies discuss their cyber security risks in their 10-K statements specifically

mentions, "Risks related to cyber incidents that may remain undetected for an extended period"

as a risk to be discussed. Adopting a Security Intelligence approach when looking for unknown

threats in 'normal' IT data is a mitigation strategy that can be mentioned in the 10-K.

Protect Your IT Infrastructure from Known and Unknown Threats

The nature of security threats has changed. The newest security threats are not detectable by

rule and signature based systems or traditional Security Information and Event Management

systems (SIEM). SIEMs use a “data-reduction” strategy: collection-time normalization, data storage with limited search capabilities, and monitoring of a subset of the collected data for roughly 200+ specific conditions (rules) that indicate the possibility of a “known threat.”

The newest threats are often 'unknown'. These are from advanced persistent adversaries that

use social engineering to get the user to bring malware into the enterprise for them--

circumventing perimeter defenses. Once inside the enterprise the malware can hide its activities

behind 'normal' credentialed user activities. This malware acts in the same way as a malicious

insider--siphoning off large amounts of valuable company data for illegal gain.

Splunk and the Unknown Threat

Splunk's powerful big-data analytics engine can accept IT risk scenario style thinking and support

a security intelligence approach to discovering unknown threats. By placing large amounts of

normal machine-to-machine and human-to-machine generated data into Splunk and applying

analytics, users can separate normal behaviors from ones that may be malicious. These

scenarios may be based on the time a user activity occurred, how long the activity took, how

often a user accesses a system containing important data assets, the location from which the

activity was initiated, or any combination of conditions. Splunk facilitates a 'data-inclusion'

strategy. Using Splunk to perform this analysis coupled with a thorough knowledge of the

modus operandi of the attacker is the key to finding and identifying "unknown threats."


The Splunk App for Enterprise Security

The Splunk App for Enterprise Security provides a window into “known” threat data collected from the traditional components of the security architecture. The Enterprise Security App features a metrics- and domain-based approach to consolidating and monitoring access, endpoint and network protection products, while giving the user form-based searches and the ability to drill down directly into the data. Once in the raw data, the App provides cross-data-type investigation workflows and a robust incident management system to classify, prioritize, assign, and document security incidents. The App is flexible enough to allow users to add their own real-time correlation searches and additional dashboards, tailoring the App to the needs of security and business users. At any point in data exploration, the App can be used as a jumping-off point for investigations of both known and unknown threats.

Watching for ‘Unknown Threats’ from Advanced Persistent Attackers

Advanced Persistent Threats (APTs)--organized attacks from persistent adversaries and the malware they leave behind--are a growing problem for enterprises in many industry verticals and for governmental agencies. A quick review of the headlines since the beginning of 2010, when Operation Aurora was made public by Google, gives us some clues about which industries and companies are targets:

Companies involved in highly technical or classified work

Governmental agencies that have data stores containing information about domestic and foreign policy

Companies involved with cutting-edge consumer product work, where innovation times are very short or proprietary information allows them to maintain a competitive edge against all competitors

Communication companies with data stores containing communications of persons of interest to others (whether other governments or other interested parties)

The list above should not be considered definitive, as it is influenced by the motivations and imagination of the attacker.

The inability to prevent these attacks is not the fault of currently available security systems or of the thinking of security teams. These attacks target individuals in a company who are profiled by the attacker as having the potential to give up highly valuable information that can be used for nefarious purposes. Mandiant, a leading information security company with commercial and Federal clients, tells us that there are thousands of companies actively compromised right now.

The malware left behind is meant to be stealthy, often looking like a normal service or application that starts up at boot time in order to remain persistent. It looks to spread across systems so that if an instance is found and removed, an attacker can perform their own post-mortem, activate another instance of the malware, and change the way the malware works so that it stays resident in the enterprise and continues to collect data. The question becomes, “How can I efficiently review terabytes of ‘normal’ data from machines and users looking for patterns that can mean a policy violation or malicious activity?”

Splunk: A big data solution – finding the ‘Unknown Threat’

Discovery of malware left behind by determined, persistent and highly skilled attackers is not possible with signature- and rule-based systems reporting their data to a SIEM. These systems look for abnormal behaviors and covert attacks--not anomalies in normal behavior. Finding malware designed to hide in normal activity requires a system that can ingest massive amounts of seemingly normal system data that, when taken together in context through the lens of robust data analytics, can point out the differences between normal machine and human behavior and malware.

Splunk can collect and index any data without regard to format or size and perform automated searches across petabytes of data. Splunk’s verbose analytics command language facilitates a security intelligence approach, allowing the analyst to ask threat-scenario-based questions of the data aligned with business risk -- in other words, ‘thinking like a criminal’. This approach lets you find ‘known threats’ as reported into Splunk by signature- and rule-based systems and ‘unknown threats’ represented as data patterns in normal activities.

Compliance Solutions

Any Data Any Compliance

Compliance mandates such as PCI, HIPAA and FISMA require businesses to protect, track, and control access to and usage of sensitive information. Each requirement has its own set of complicated, costly, and time-consuming demands. Addressing these strains IT resources and creates redundant processes and expenditures within an organization.

For example, compliance audits result in a lot of manual data requests, creating a huge

distraction for IT. Companies are required to retain data for long periods, driving the purchase of

expensive log management software, appliances and related storage, just to comply in this one

area, but with little operational value. Compliance requirements to monitor logs and changes

drive costly investments in SIEM, change monitoring and other technologies to implement

specific monitoring and controls. Compliance also impacts day-to-day operations with

segregation of duties keeping developers and operational teams off production systems, which

in turn affects troubleshooting and system availability.

Cost Effective, Repeatable Compliance Solutions

Splunk solves all of these challenges in one place. Splunk indexes your machine data in real time, allowing you to search, alert and report on all of it. You can generate reports in seconds while at the same time meeting requirements to collect and retain specific audit trails. Splunk’s ability to do both security and change monitoring satisfies the requirements for these controls. It even allows developers to safely access production data, without distracting operations teams or causing compliance violations or exceptions.

The hundreds of customers using Splunk for compliance routinely comment on their ability to

quickly close compliance gaps, enable greater levels of automation to meet compliance

mandates, and demonstrate compliance across all requirements from a single system.

Using Splunk for compliance helps satisfy the requirements of multiple mandates in a single system.

You can monitor access to and usage of all your sensitive data and quickly generate reports to

demonstrate compliance with the given regulation more simply and cost-effectively than before.

FISMA - Securely collect, index and store all of your log and machine data along with

audit trails to meet the critical requirements of regulations and standards affecting

United States federal agencies and contractors.

PCI - Meet requirements for audit trail collection, retention and review. Generate reports

in seconds to prove compliance with any control. Comply with explicit data control

requirements across your infrastructure, including file integrity monitoring.

SEC - Use Splunk and pattern-based analysis as part of a risk mitigation strategy for "Risks

related to cyber incidents that may remain undetected for an extended period," as

suggested by the SEC for 10-K risk-factors guidance.

FISMA Compliance

The Federal Information Security Management Act of 2002 (FISMA) and the associated NIST

standards are driving all federal agencies to adopt a security risk management approach.

Specific IT controls from NIST’s 800-53 have become the IT controls grail for federal agencies, and NIST’s 800-37 document drives a risk-based approach to prioritizing the work to be performed, modeled on the principles of confidentiality, integrity and availability (CIA). The Office of Management and Budget (OMB) is charged with overseeing FISMA compliance using an audit process that assigns grades to agencies indicating their level of FISMA compliance.

FISMA compliance and the underlying NIST documentation require each agency to:

Inventory agency information systems

Categorize information systems

Define minimum security controls

Establish an on-going risk assessment process

Develop system security plans (SSP) for each information system

Conduct regular certification and accreditation of the systems

Provide on-going monitoring of information systems


The goal of FISMA is to verify through annual audit that agencies can respond to changes in the

IT architecture both foreseen and unforeseen in an efficient, consistent, and prioritized manner

based on asset information and information risk.

The FISMA Compliance Challenge and Coming Changes

Federal agencies have come a long way since the 'D' and 'F' grades given to agencies when

FISMA was passed in 2002. The 2009 OMB report indicates that all agencies continue to show performance improvements, with most agency audits in the ninetieth percentile for compliance. Yet FISMA compliance is still checklist-driven, and infections continue to pop up from time to time. According to Congresswoman Diane Watson, “Congress and other government agencies are now under a cyber-attack an average of 1.8 billion times a month.”

FISMA will shift agencies to real-time threat monitoring of the federal IT infrastructure.

Continuous monitoring has already been implemented at the Department of State, which

according to a recent Government Computer News (GCN) article has, "...significantly improved

its security posture while lowering the cost..."

Splunk Provides Continuous Monitoring of FISMA Risk-Based Controls

Splunk can monitor data streams in real time and search terabytes of historical data, continuously monitoring data arriving as ASCII text from any source. Splunk can monitor changes to files that can indicate system “configuration drift” against a baseline.

Splunk’s search language lets you find what you’re looking for across terabytes of data, and includes statistical functions that allow you to compute averages, look for outliers, and continuously monitor and measure your state of compliance.

Splunk's ability to accept and store knowledge from users as metadata tags means that

data and system classifications can be used to drive reports and dashboards supporting

metrics for KPIs relating to 800-53 v3 controls.

Splunk’s “look-up” feature allows you to pull data from an asset management database that may contain contextual information about hosts, such as security classifications, system owner information, and up-time requirements. Part or all of this information can be included in reports and dashboards presented to users.

Splunk can be tailored to scale while supporting role-based access to dashboards and reports and allowing direct drill-down into the supporting data. Dashboards and visualizations update in real time, making Splunk ideal for NOC or SOC operations.

With Splunk, agencies can meaningfully operationalize FISMA compliance by continuously

monitoring security of all data generated by the IT architecture with complete situational

awareness in real time.


PCI Compliance

Log Data Suspect, Poor Visibility into System Access

Collecting and retaining audit trails for at least a year is among the most daunting requirements

for PCI compliance. It's difficult to access, analyze and manage all the data. Legacy solutions

demand constant maintenance and are open to question by auditors. Implementing adequate

integrity controls is a significant technical challenge.

PCI Compliance without Disrupting Ongoing Operations

With Splunk you can securely collect all PCI-relevant data and then search, alert and report on it

to address the complete range of PCI related issues and requirements. Generate reports in

seconds to prove compliance with any PCI control, from password policy to firewall

configuration. Comply with PCI's explicit data control requirements including log collection,

review and retention requirements across your entire infrastructure as well as file integrity

monitoring.

Benefits

Rapid compliance with PCI requirements for audit trail collection, retention and review

Meet requirements for file integrity monitoring

Prove compliance with all PCI controls

Answer any auditor data request in seconds

Increase availability by overcoming PCI-mandated access restrictions

Control access to sensitive data

Use Splunk for:

Secure Central Log Collection (Requirement 10.5)

Splunk provides the most comprehensive solution for PCI's explicit requirement for secure log

collection.

Daily Log Review (Requirement 10.6)

Splunk makes the chore of daily log review easy with fast search, visualization and tagging, and tracks your daily review history for your auditors.


Secure Remote Access (Requirement 7.1)

Splunk eliminates the hidden toll PCI takes on availability by providing secure, remote access to

all machine data despite strict production controls.

Audit Trail Retention (Requirement 10.7)

Keep the cost and hassle of retaining logs for PCI under control. Splunk stores your data in an

efficient, compressed format and lets you control data retention by age.

File Integrity Monitoring (Requirements 10.2.2, 11.5, 10.5.5)

With Splunk you don’t need to buy one tool for configuration auditing and another for log management. Capture and index changed files for audit trails and administrative actions.

PCI Control Reporting (All requirements)

Splunk not only gives you compliance with key PCI requirements, but it lets you demonstrate

compliance quickly and easily across all PCI-mandated controls.

Splunk PCI Compliance Suite

Splunk PCI Compliance Suite covers all the relevant PCI DSS requirements including live controls

monitoring, process workflow, checklists and reporting. Co-developed with our

partner Glasshouse Technologies, a global provider of data center infrastructure consulting services,

the Splunk PCI Compliance Suite provides a broader and deeper view of your compliance

posture across all in-scope data sources including complex application logs and configurations.

Collect and retain all your log and configuration data even if your PCI domains are generating

terabytes every day. Efficient workflows for audit-trail review and built-in change monitoring eliminate the need for additional technologies and point-product purchases to pass your PCI DSS audit. Eliminate unnecessary developer and IT access to production systems, keeping PCI DSS exceptions to a minimum. The suite uses the Splunk Common Information Model (SCIM) to integrate with other Splunk Solution Suites and external systems, and it is backed by Splunk Professional Services delivery. Contact us and we’ll show you how the Splunk PCI Compliance Suite

can help you meet your compliance goals.


SEC Compliance

Cyber Risk and Advanced Persistent Threats

All public companies must file an annual report as required by the Securities and Exchange

Commission (SEC), giving a comprehensive summary of a company’s performance. This document, called the 10-K, includes information such as company history, organizational structure, executive compensation, equity, subsidiaries, and audited financial statements, among other information. Investors use this information when deciding whether to purchase equity in the company as stock.

More specifically, the 10-K contains a section (Item 1A) called Risk Factors. Here, the company lays out

anything that could go wrong, likely external effects, possible future failures to meet obligations,

and other risks disclosed to adequately warn current and potential investors. Examples of risks

identified might include: disruption of capital markets, natural disasters, legislative and

regulatory actions, or other macro-economic conditions.

Understanding Cyber Business Risks in the 10-K

On October 13, 2011, in recognition of the fact that nearly all public companies interact with customers or suppliers online, store digital documents and rely heavily on information technology, the SEC issued new guidance for completing the risk factors section of the 10-K.

"For a number of years, registrants have migrated toward increasing dependence on digital

technologies to conduct their operations. As this dependence has increased, the risks to

registrants associated with cyber security have also increased, resulting in more frequent and

severe cyber incidents. As a result, we determined that it would be beneficial to provide guidance

that assists registrants in assessing what, if any, disclosures should be provided about cyber

security matters in light of each registrant's specific facts and circumstances."

The SEC also mentions that cyber-attacks can take the form of a denial-of-service or may be carried out through highly sophisticated efforts to electronically circumvent network security, using social engineering to gain access to sensitive data.

'Unknown Threats' and Mitigation Strategies

Among the examples of appropriate disclosures for SEC compliance, the SEC lists "Risks related

to cyber incidents that may remain undetected for an extended period." This is a direct

reference to 'unknown threats' from malicious insiders or malware left behind by advanced

persistent attackers.

In the risk factors section of the 10-K, many companies also list ways that particular risks can be mitigated. Using Splunk to monitor ‘weak signals’ in massive amounts of normal user data for abnormal events is a mitigation strategy for unknown threats from malware that could go undiscovered for long periods of time.


HIPAA

Splunk: Big-data for Healthcare

Healthcare data is generated by numerous systems and in a wide variety of formats--syslog,

custom application logs, XML, HL7 and myriad other formats. Add to this business vertical an IT

vendor technology landscape that is influenced by mergers, acquisitions and disparate and

conflicting development processes. It's no surprise that most healthcare applications do not

conform to a single data format. With so many off-the-shelf and custom applications providing information in unique formats to contend with, managing this data and deriving value from it represents an ongoing struggle for healthcare industry IT professionals.

Most healthcare providers are concerned about three things:

Profitability and Efficiency - making sure service is optimized for every dollar spent;

Better Patient Outcomes - improving the quality of service delivered to the patient; and

HIPAA Compliance - making sure we protect patient (and employee) data while giving access to the right persons at the right times to do their jobs.

Most healthcare payers (insurance companies) are concerned about three things:

Profitability and Efficiency - making sure service is optimized for every dollar spent;

Fraud - understanding the difference between billing errors and organized schemes to defraud; and

HIPAA Compliance - making sure that people see only the data they need to do their jobs.

It’s no accident that these concerns look similar. The answer to these concerns really is “it’s in the data.” Seeking patterns in large amounts of data--terabytes and petabytes--collected from a wide variety of systems, correlated and seen in the context of time and place, can provide answers to the most common and pressing questions asked by these two sides of the healthcare coin.

Splunk: An agile big-data solution

Some of the many business questions Splunk is able to answer are:

Are the third shift nurses more efficient than first shift when administering prescribed

treatments?

How much drug diversion is taking place in the hospital?

Are off-shift hospital personnel viewing patient data records and what's the potential

fine amount to the hospital?

Are there multiple claims from the same doctor for reimbursement for services from many different cities, for more patients than humanly possible?


What are the anomalies in the numbers of specific kinds of treatments provided, against a rolling 30-day average, from a particular location?

Splunk can be used to measure the amount of time spent with a patient at each phase of service

to support a Time-Driven-Activity-Based-Costing model in lieu of a payment for services model.

Splunk was founded specifically to focus on the challenges and opportunity of effectively managing massive amounts of machine data, as well as context from other databases. Over 3,700 customers in 75+ countries use Splunk to harness the power of their machine data for application management, IT operations, cyber-security, compliance, web intelligence, and business analytics. With Splunk they achieve new levels of visibility and insight that benefit IT and the business.

Splunk can collect and index any data without regard to format and perform Google-like searches across petabytes of data. Splunk’s verbose, flexible analytics command language allows you to ask questions of your data that, when translated into automated search queries, can answer specific business questions.

Splunk Enterprise


Product Overview

Splunk is the engine for machine data. It collects, indexes and harnesses the machine data generated by all your IT systems and infrastructure—physical, virtual and in the cloud.

Machine data is one of the fastest growing, most complex segments of data in your

organization. It’s also one of the most valuable, containing a definitive record of user

transactions, customer behavior, machine behavior, security threats, fraudulent activity and

more.

Splunk collects machine data securely and reliably from wherever it’s generated. It stores and indexes the data in real time in a centralized location and protects it with role-based access controls. Splunk lets you search, monitor, report and analyze your real-time and historical data. Now you have the ability to quickly visualize and share your data, no matter how unstructured, large or diverse it may be.

Troubleshoot application problems and investigate security incidents in minutes instead of

hours or days, avoid service degradation or outages, deliver compliance at lower cost and gain

new business insights. With Splunk you can gain rapid visibility, insights and intelligence for IT

and business.

Splunk Capabilities

Collect and Index Any Machine Data. Splunk collects and indexes machine data in real time from

virtually any source, format or location. This includes live data from your packaged and custom

applications, app servers, web servers, databases, networks, virtual machines, telecoms

equipment, OS’s and more.

No matter the source or format, Splunk indexes it the same way—without custom parsers or

connectors to purchase, write or maintain. Once in Splunk, all your machine data is available for

troubleshooting, security incident investigations, network monitoring, compliance reporting,

business analytics and other valuable uses. As your data needs grow, Splunk scales efficiently

using commodity hardware.

Search and Investigate. Search and analyze real-time and historical machine data from one place

with Splunk. Search for specific terms or expressions. Use Boolean operators to refine your

search. Trace transactions across multiple systems.

Powerful statistical and reporting commands let you update transaction counts, calculate

metrics and look for specific conditions within a rolling time window. Search Assistant offers

type-ahead and contextual help so that you can access the full power of the Splunk search

language.


Interact with your search results in real time. Zoom in and out on a timeline to quickly reveal

trends, spikes and anomalies. Click to drill down into results and eliminate noise to find the

needle in the haystack. Whether you’re troubleshooting or investigating an alert, you’ll find the

answer in seconds or minutes rather than hours, and without escalating to other groups. Real-time search and alerting means you can correlate, analyze and respond to real-time events. Track live transactions and online activity, see and respond to incidents and attacks as they occur, and monitor application SLAs in real time.

Add Knowledge. Splunk automatically discovers knowledge from your machine data at search time so you can start using new data sources immediately. You can add context and meaning to your machine data by identifying, naming and tagging fields and data points. Add information from external sources such as asset management databases, configuration management systems and user directories, making the system smarter for all users.

Monitor and Alert. Turn searches into real-time alerts that automatically trigger actions such as

sending automated emails, running remediation scripts or posting to RSS feeds. Alerts can also

send an SNMP trap to your system management console or generate a service desk ticket. Alerts

can be set to any level of granularity and can be based on a variety of thresholds, trend based

conditions and complex patterns, such as abandoned shopping carts, brute force attacks and

fraud scenarios.


Report and Analyze. Quickly build advanced charts, graphs and dashboards that show important

trends, highs and lows, summaries of top values and frequency of occurrences. Create robust,

information-rich reports from scratch without any advanced knowledge of search commands.

Drill down from anywhere in the chart to the raw events. Save reports, integrate them into

dashboards and view them all from your desktop or mobile device. Create PDFs on a scheduled

basis to share with management, business users or other IT stakeholders.

Create Custom Dashboards and Views. Create live dashboards in a few clicks using the

dashboard editor. Dashboards integrate multiple charts and views of your real-time data to

satisfy the needs of different users, such as management, business or security analysts, auditors,

developers and sysadmins. Users can edit dashboards using a simple drag and drop interface

and change chart types on-the-fly with integrated charting controls.

Splunk Apps. Create apps on Splunk that deliver a targeted user experience for different roles

and use cases. You can share and reuse apps within your organization and the rest of the Splunk

community. There are a growing number of apps available on our community site

(www.splunkbase.com), built by our community, partners and Splunk. Apps that help visualize

data geographically, or that provide pre-canned compliance views; apps for different

technologies such as Windows, Linux, UNIX, virtualization, networking and more.

Scale to the Largest IT Infrastructures. The Splunk architecture is based on MapReduce and scales linearly across commodity hardware as data volumes grow. Start small on a single commodity Windows, Linux or UNIX server and then deploy Splunk across multi-geography, multi-datacenter infrastructures generating tens of terabytes of data per day.

Security is important and role-based access controls govern how far a user’s search can extend.

Regional users can see data from the systems within their region and enterprise wide users can

reach all datacenters. The Splunk vision is for every authorized employee to have the data view

they need—whether for investigations, reports and dashboards, or analysis to improve IT

operations and gain valuable business insights.

Secure Data Access and Single Sign-on. At the core of Splunk is a robust security model. Every

Splunk transaction is authenticated, including system activities and user activities through web

and command line interfaces. Splunk also integrates with LDAP-compliant directory servers and

Active Directory to enforce enterprise-wide security policies.

Single sign-on integration enables pass-through authentication of user credentials. Since all of

the data you need to troubleshoot, investigate security incidents and demonstrate compliance

persists in Splunk, you can safeguard access to your sensitive production servers.


It’s Software; Download it and Install it in Minutes. Splunk is enterprise software made easy. Try

Splunk on your laptop and then deploy it to one or more datacenters. You’re up and running

with a web interface for users and a powerful engine for indexing your machine data.


Log Management in the Cloud

1. Basic Practices
2. Identify Your Goal

Basic Cloud Based Log Management – Data Flow

1. Pushing logs into the cloud

a. Configuring sensors and agents to collect the logs and send them to a Software-as-a-Service (SaaS) listener, a managed or dedicated service/appliance in the cloud, making sure it is done over a secure channel (a minimal transport sketch follows this list)

2. Site-to-site VPN from the organization to the cloud

a. A virtual appliance running on your IaaS (Infrastructure as a Service) allows you to collect logs and monitor both your enterprise and cloud infrastructure
b. A physical/virtual appliance running in your enterprise monitors your cloud infrastructure
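As referenced in item 1a above, here is a minimal sketch of an agent pushing log lines to a cloud collector over TLS. The hostname, port and message format are placeholders; a real SaaS provider defines its own endpoint, certificates and authentication.

```python
# Sketch of "push logs to the cloud over a secure channel": ship log lines
# to a SaaS collector over TLS. Endpoint details are illustrative.
import socket
import ssl

COLLECTOR_HOST = "logs.example-saas.com"   # hypothetical SaaS endpoint
COLLECTOR_PORT = 6514                      # common syslog-over-TLS port

def ship_lines(lines):
    context = ssl.create_default_context()  # verifies the server certificate
    with socket.create_connection((COLLECTOR_HOST, COLLECTOR_PORT)) as raw:
        with context.wrap_socket(raw, server_hostname=COLLECTOR_HOST) as tls:
            for line in lines:
                tls.sendall(line.encode("utf-8") + b"\n")

ship_lines(["<134>app01 user login ok", "<132>app01 user login failed"])
```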


Basic Cloud Based Log Management – Players

LogEntries


Papertrail


Sumo Logic


LogLogic


Managed Security Services – Top World Providers Analysis


VMware-based Terremark provides cloud-based SIEM (MSS, Managed Security Services), integrates with Tripwire for online integrity and data monitoring, and provides cloud and on-site forensics.


IBM Managed Security Services


Trustwave Managed SIEM


StillSecure - Cloud Security Services Platform


CloudAccess Cloud Based SIEM


RSA enVision - Cloud Managed SIEM Providers


HP ArcSight - Cloud Managed SIEM Providers


Rackspace Cloud Monitoring


Cloud Based Alerting Infrastructure


Locate Resources


Cloud Based Anomaly Detection – Anomaly Checking vs. Other Organizations

Incident Response, Notification, and Remediation

The nature of Cloud Computing makes it more difficult to determine who to contact in case of a

security incident, data breach, or other event that requires investigation and reaction. Standard

security incident response mechanisms can be used with modifications to accommodate the

changes required by shared reporting responsibilities. This domain provides guidance on how to

handle these incidents.

The problem for the cloud customer is that applications deployed to cloud fabrics are not always

designed with data integrity and security in mind. This may result in vulnerable applications

being deployed into cloud environments, triggering security incidents. Additionally, flaws in

infrastructure architecture, mistakes made during hardening procedures, and simple oversights

present significant risks to cloud operations. Of course, similar vulnerabilities also endanger

traditional data center operations.

Technical expertise is obviously required in incident handling, but privacy and legal experts have

much to contribute to cloud security. They also play a role in incident response regarding

notification, remediation, and possible subsequent legal action. An organization considering

using cloud services needs to review what mechanisms have been implemented to address

questions about employee data access that is not governed by user agreements and privacy


policies. Application data not managed by a cloud provider’s own applications, such as in IaaS

and PaaS architectures, generally has different controls than data managed by a SaaS provider’s

application.

The complexities of large cloud providers delivering SaaS, PaaS, and IaaS capabilities create

significant incident response issues that potential customers must assess for acceptable levels of

service. When evaluating providers it is important to be aware that the provider may be hosting

hundreds of thousands of application instances. From an incident monitoring perspective, any

foreign applications widen the responsibility of the security operations center (SOC). Normally a

SOC monitors alerts and other incident indicators, such as those produced by intrusion

detection systems and firewalls, but the number of sources that must be monitored and the

volume of notifications can increase exponentially in an open cloud environment, as the SOC

may need to monitor activity between customers as well as external incidents.

An organization will need to understand the incident response strategy for their chosen cloud

provider. This strategy must address identification and notification, as well as options for

remediation of unauthorized access to application data. To make matters more complicated,

application data management and access have different meanings and regulatory requirements

depending on the data location. For example, an incident may occur involving data in Germany,

whereas if the same data had been stored in the US it might not have been considered an issue.

This complication makes incident identification particularly challenging.

Recommendations

Cloud customers need to clearly define and communicate to cloud providers what they

consider incidents (such as data breaches) versus mere events (such as suspicious intrusion

detection alerts) before service deployment.

Cloud customers may have very limited involvement with the providers’ incident response

activities. Therefore it is critical for customers to understand the prearranged

communication paths to the provider’s incident response team.

Cloud customers should investigate what incident detection and analysis tools providers use

to make sure they are compatible with their own systems. A provider’s proprietary or

unusual log formats could be major roadblocks in joint investigations, particularly those that

involve legal discovery or government intervention.

Poorly designed and protected applications and systems can easily overwhelm everyone’s

incident response capabilities. Conducting proper risk management on the systems and

utilizing defense-in-depth practices are essential to reduce the chance of a security incident

in the first place.

Security Operations Centers (SOCs) often assume a single governance model related to incident response, which is inappropriate for multi-tenant cloud providers. A robust and well-maintained Security Information and Event Management (SIEM) process that identifies

available data sources (application logs, firewall logs, IDS logs, etc.) and merges these into a

common analysis and alerting platform can assist the SOC in detecting incidents within the

cloud computing platform.

To greatly facilitate detailed offline analyses, look for cloud providers with the ability to

deliver snapshots of the customer’s entire virtual environment – firewalls, network

(switches), systems, applications, and data.

Containment is a race between damage control and evidence gathering. Containment

approaches that focus on the confidentiality-integrity-availability (CIA) triad can be effective.

Remediation highlights the importance of being able to restore systems to earlier states,

and even a need to go back six to twelve months for a known-good configuration. Keeping

legal options and requirements in mind, remediation may also need to support forensic

recording of incident data.

Any data classified as private for data breach regulations should always be encrypted to

reduce the consequences of a breach incident. Customers should stipulate encryption

requirements contractually, per Domain 11.

Some cloud providers may host a significant number of customers with unique applications.

These cloud providers should consider application layer logging frameworks to provide

granular narrowing of incidents to a specific customer. These cloud providers should also

construct a registry of application owners by application interface (URL, SOA service, etc.).

Application-level firewalls, proxies, and other application logging tools are key capabilities

currently available to assist in responding to incidents in multi-tenant environments.

What's in a cloud security plan?

Q1 Labs’ CSO, Chris Poulin, recently authored a paper defining best practices for IT Security in a

cloud environment. In it, he covers some interesting viewpoints on various hurdles expected

when organizations secure their public or private cloud environments, as well as the steps

necessary to create an effective security policy, and the similarities between SIEM and cloud

environments.

Since this is a week of two major cloud-related conferences (VMworld and Dreamforce), let’s

talk cloud security!

What are a few of the steps cloud providers and customers can take when building out their

own cloud security plan? One major chunk of the process is to start with an assessment of risk.

That is, understand your current data types, locations, business processes, and information flow.

Understand where the critically sensitive data is. Just like any other enterprise, cloud computing


requires customers and cloud providers to define their own information topology before any

reasonable security policy can be defined and implemented.

Step 1: Discovery

Know where all of your data is, no matter how you classify it. The key is uncovering the

difference between the data that can and cannot be housed in the cloud. An eDiscovery process

is recommended to locate buried and even misplaced data. Too often organizations find that

Personally Identifiable Information (PII) is mixed with less critical data and matched with the

wrong security protocols.

Step 2: Classification

After understanding where your data is, it needs to be classified appropriately and distributed to

systems with security controls to match the data sensitivity. This step alone can help you make

progress toward meeting various compliance regulations.

Step 3: Data transit

SIEM can help define your data transit policy by monitoring endpoints, firewalls, and network

activity to govern whether the data should be allowed to proceed to the cloud. Content-aware

network profiling from Data Loss Prevention (DLP) solutions can be fed to the SIEM to perform

more complex correlations with other data feeds. For example, watch for PII such as a social

security number in a patient healthcare record and combine that with the firewall logs and

network activity found within a SIEM to gain a bigger picture of malicious activity.
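A toy sketch of such a content-aware check: scan outbound records for an SSN-like pattern and emit a flag that a SIEM could correlate with firewall and network logs. The regex and event shape are deliberately simplified assumptions.

```python
# DLP-style content check feeding a SIEM: flag events whose payload matches
# a (deliberately simplified) SSN-like pattern for downstream correlation.
import re

SSN_LIKE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def dlp_flags(events):
    """Yield alert records for events whose payload looks like it holds PII."""
    for event in events:
        if SSN_LIKE.search(event["payload"]):
            yield {"src": event["src"], "flag": "possible_pii_exfil"}

events = [
    {"src": "10.1.2.3", "payload": "patient 123-45-6789 discharged"},
    {"src": "10.1.2.4", "payload": "routine status ping"},
]
for alert in dlp_flags(events):
    print(alert)   # a SIEM would correlate this with firewall/network logs
```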

As Chris Poulin has blogged, there is no question that more modern SIEM (a.k.a. Security

Intelligence) solutions have their place in the cloud. It’s not a matter of whether SIEM is ready for the cloud, but whether the cloud is ready for SIEM.


Managed Cloud-Based SIEM Service

Protect Your Business Globally with Uncompromising Security Information You Can Act On

Business Drivers

These days, companies face relentless and increasingly sophisticated security attacks on their

corporate networks. Ongoing collection, correlation and analysis of security data so that you can

act on it is vital in meeting compliance, fending off threats and protecting critical corporate

resources.

Challenges and Needs

With network security attacks on the rise, businesses must be proactive in strengthening their

security infrastructure. Given the serious consequences for corporate networks that remain

vulnerable, resorting to a do-it-yourself approach to security may expose a company to

significant and irreversible risk.

Implementing strong corporate security is a substantial undertaking, and requires an airtight security alerting system, a strategy for handling escalations efficiently, and a carefully planned

incident-ticketing process for tracking and following up on security-related events.

The inevitable onslaught of alert events can create headaches for companies trying to manage

their own security information and event management solutions. Differentiating legitimate

threats from false positives is time-consuming and expensive. On top of that, assembling,

analyzing, and reporting on the millions of log entries that are generated across various devices

is an overwhelming time sink and takes valuable IT resources away from their priority-one tasks.

Managed Cloud-Based SIEM Service Overview

Virtela's Managed Cloud-based Security Information & Event Management (SIEM) service

delivers powerful security tools and expertise to help protect your business-critical server

applications and network resources across multiple devices all day, every day, so you can

dedicate vital company time and expenses to keeping your business operating at its best.

Managed Cloud-Based SIEM service features include:

Managed, cloud-based service. Enabled by Virtela Enterprise Services Cloud (ESC), Managed

Cloud-based SIEM service saves you time and money, with its instant service activation

and elimination of up-front investments in hardware. Plus, you don't get bogged down

in deploying and managing your own SIEM platform. Alternatively, if you already have a SIEM platform, Virtela can manage your current implementation and provide

ongoing SIEM service.


Proactive device monitoring. There's no need to hire extra staff or train your existing staff on

the latest vendor technologies. Let Virtela continuously manage, monitor, and

troubleshoot network health and performance across your entire infrastructure every

minute of the day and night.

Regulatory reporting capabilities. Gain confidence in your auditing tasks with tools that help

you create reports which are in line with compliance requirements for Sarbanes-Oxley,

HIPAA, GLBA, PCI, CA SB-1386, and others. The service provides real-time investigation

of security events to determine which events should be acted upon, a requirement of increasingly stringent compliance audits. It provides accurate visibility into

the entire network landscape, not just for specific devices.

Individual asset valuation. As part of the service, Virtela engineers work closely with you to

assign a value to assets such as a device, a network subnet, or a location (e.g., data

center) based on the potential business impact if the asset is compromised or unavailable for an extended period of time. The

service then identifies specific events that generate alerts in real time and puts into

place an action plan uniquely developed based on the asset values. This customized

approach puts your company in a great position to safeguard against attacks, without

requiring in-house security expertise.

Integrated with any Virtela security service. Virtela integrates SIEM and log retention and

management tools with any of its security services to give you the necessary event-

correlation capabilities and reporting.

Exceptional customer support. Virtela is committed to going the extra mile and doing what it

takes to provide the best service for its customers. With a business model that's carrier,

vendor and technology independent, you can count on flexible, customized solutions,

integrating the best partners to meet your specific needs. Virtela maintains an

impressive 99 percent rate for opening trouble tickets proactively and responds quickly, with a 12-second average speed to answer. And you can be assured of Tier 1 engineer technical support.

Business Benefits

The key benefits to your business are as follows:

Save money. Save the cost of purchasing your own SIEM platform as well as the ongoing

expense of training and support staff to maintain and manage these security functions.

Comply with regulations. Meet compliance with security audit reports as well as real-time

investigation of security events to determine which events should be acted upon.

Become more efficient. The service allows you to concentrate on your core business and

focus on strategic IT initiatives while having the peace of mind that you have visibility into

the security threats and health of your entire IT infrastructure and applications.


SIEM in the Cloud

We are creating data at a rate beyond our wildest dreams. Last year, 161 exabytes (an exabyte is a

billion gigabytes) of digital information were created, roughly 3 million times the information in

all the books ever written. At this rate the "digital universe" could balloon to 1,800 exabytes by

2011. This proliferation of data brings a corresponding need for data security especially in highly

regulated industries such as healthcare, retail, banking, government, and utilities.

And as this increase in data continues, so do the requirements to store and secure it. Enter

"The Cloud." Cloud computing is growing rapidly as organizations of all sizes try and deal with

the exponential increase in storage and on-demand access while at the same time, keeping

costs down and availability up. Cloud computing has become a viable option for dealing with

these issues. However, as with any new technology, new threats and security considerations

also are introduced.

Security Information and Event Management (SIEM) technology is critical in securing

organizations, their data and infrastructure as they move to the cloud. nFX addresses the protection of this overwhelming volume of data, identifies threats in real time, and ensures that complex regulatory compliance mandates are met.

Our solutions transform volumes of noisy, low-level security event information generated by

security and network devices, applications, and operating systems into information that can be

quickly and easily understood by security analysts and IT staff. Using data collection,

aggregation, normalization and correlation technology, SIEM collects volumes of diverse data

from devices and applications across the network and across the cloud - and transforms it into

real-time actionable security intelligence. And with this whole new level, breadth and quality of

security decision support, organizations can more easily and cost effectively reduce risk, address

compliance, and assure business continuity.
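The normalization step that pipeline depends on can be sketched in a few lines: device-specific raw lines are parsed into one common event shape so correlation can treat them uniformly. Both input formats below are invented for illustration.

```python
# Normalization sketch: map device-specific log lines into a common event
# dict so that downstream correlation and alerting see one uniform shape.
import re

FIREWALL = re.compile(r"^FW host=(?P<host>\S+) action=(?P<action>\S+)")
AUTH     = re.compile(r"^AUTH user=(?P<user>\S+) result=(?P<result>\S+)")

def normalize(line):
    m = FIREWALL.match(line)
    if m:
        return {"source": "firewall",
                "host": m.group("host"), "action": m.group("action")}
    m = AUTH.match(line)
    if m:
        return {"source": "auth",
                "user": m.group("user"), "result": m.group("result")}
    return {"source": "unknown", "raw": line}   # keep unparsed lines too

for raw in ["FW host=edge1 action=deny", "AUTH user=alice result=failure"]:
    print(normalize(raw))
```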

Enterprise SIEM in the Cloud

Any cloud-based solution must have the processing horsepower to ensure reliable and secure

data flow, yet be flexible enough to meet the needs of small departments, all the way up to

globally dispersed operations centers.

nFX SIM One is the only SIEM solution built on a robust multi-tier architecture that can scale to

deliver 24x7 SIEM support across a complex, distributed, and heterogeneous environment at a

low total cost of ownership. nFX SIM One architecture forms a backbone to guarantee users

reliable access to rich SIEM functionality, including comprehensive correlation, dynamic threat

visualization, reporting and analytics, an embedded security knowledgebase, and an integrated

incident resolution management workflow.


SIEM for Managed Security Service Providers

Faced with the daunting task of monitoring mass volumes of data to prevent data loss, theft and

destruction, it's no wonder more and more companies are outsourcing security to save money and efficiently scale their services. SIEM is a natural fit for the outsourcing model.

Today's Managed Security Service Providers are ideally positioned to take advantage of the

double-digit annual increases in demand for cloud security services that are expected over the

next several years. Outsourced SIEM offers service providers a way to capitalize on the

exploding demand for security monitoring and compliance services, while helping to maintain

customers by fostering a relationship of trust and growth.

Selecting the right SIEM partner is a tricky task for any MSP. Not every SIEM vendor understands

the MSSP business model. Not every SIEM vendor has architected their product to support

secure multi-tenancy and the efficient sharing of resources. Not every vendor scales gracefully.

And most SIEM vendors haven't thought through the ins and outs of an MSP's reporting

requirements.

NetForensics can and does.

SIEM in the Cloud

NetForensics SIEM powers some of the biggest names in cloud computing, yet we also

understand what it takes for smaller companies to get started and succeed with cloud SIEM. For

almost a decade, our cloud solutions have helped rapidly deliver high-ROI services around threat

monitoring, mitigation, log management, and reporting. Cloud Computing providers of all sizes

around the world have come to rely on netForensics technology to gain unparalleled security

visibility and help maintain compliant operations.

And now, as your business and your demands grow, you can scale your security information management efficiently and cost-effectively.

Learn more about netForensics' Cloud SIEM options:

nFX SIM One, built for high-end SIEM applications appropriate for enterprise data center environments and

cloud services providers

nFX Cinxi One, which provides an integrated log management solution along with correlation and

remediation functions, tailored to MSPs, SMBs or departmental needs.

NetForensics powers the SIEM services of some of the world's largest Remote Operations Centers and a

variety of services are available for private-label by other cloud providers.


A Modern SIEM: IT Security Intelligence

IT Security Intelligence SIEM:

QRadar SIEM provides an integrated network security solution that converges typically siloed

network and security information into a single, cohesive system. QRadar SIEM's unique

approach enables organizations to deliver an unparalleled set of network security intelligence

services, including:

Log management

Threat/Fraud management

Compliance management

Security Event and Information Management

User Activity Monitoring

Application Monitoring

QRadar Log Manager:

QRadar Log Manager provides a comprehensive, turnkey log management solution for

organizations of all sizes. Log management has emerged as a required part of delivering security

best practices and meeting specific auditing and reporting requirements of government

regulations, including:

Payment Card Industry Data Security Standards (PCI DSS)

GCSX Code of Connection (CoCo)

Garante

FSA

Sarbanes-Oxley (SOX),

Health Insurance Portability and Accountability Act (HIPAA),

North American Electric Reliability Corp. (NERC),

Federal Energy Regulatory Commission (FERC),

Federal Information Security Management Act (FISMA)

QRadar Log Manager can also be easily upgraded via a software license key to the full-featured

QRadar SIEM, meaning you won't have to worry about losing data or purchasing and

installing additional hardware.

QRadar Risk Manager:

QRadar Risk Manager provides organizations with a comprehensive IT security intelligence

solution, allowing them not only to get forensics during and after an attack, but also to answer the "What if?" ahead of time, thereby minimizing the risk to their networks and operations and ultimately protecting their organizations' brand and intellectual property.


QRadar Risk Manager leverages and extends the value of a SIEM deployment to greatly

improve your organization's ability to automate risk management functions in mission critical

areas including network and security configuration, threat modeling and simulation, compliance

management and vulnerability assessment.

High Availability solution that delivers continuous network security monitoring:

QRadar's security information and event management (SIEM) solution is purposely built to

integrate log management with SIEM, delivering massive log management scale without any

compromise on SIEM "Intelligence". QRadar’s easy-to-deploy high availability (HA) appliances

provide fully automated failover and disk synchronization for high availability of data collection

and analysis capabilities without the need for third-party fault management products. With

QRadar's HA solution, high availability for data storage, analysis and user interfaces is achieved

through easy-to-deploy and manage appliances.

Network Activity Collectors:

QRadar's Network Activity Collectors offer a cost-effective solution for gathering the most

sophisticated and actionable network intelligence (flow data) available from your network.

Network Activity "Qflow" Collectors provide Layer 7 analysis as well as aggregation of other flow

sources including J-Flow, NetFlow, sFlow, and Packeteer's Flow Data Records, delivering an

unmatched level of IT security intelligence for the most complete collection of activity possible.

Virtual Activity Collectors:

Like QRadar's Network Activity Collectors, QRadar's Virtual Activity Collectors offer a cost-

effective solution for gathering the most sophisticated and actionable network activity data

available from your network, including Layer 7 analysis and aggregation of external flow sources, while also providing unique visibility into the activity within your virtual environment.

EventTracker Cloud


EventTracker Cloud is an enterprise-class SIEM and “Security monitoring as a Service” (SecaaS)

solution. It enables organizations to meet compliance regulations and respond to security and

operational incidents in real-time. Powerful real-time incident and reporting dashboards allow

you to have a 360-degree view of IT security and operations – all provided through a fully secure

cloud. OS logs, network logs, application and database logs, IPS/IDS logs, and change logs are

forwarded to a hosted system in real-time. All the data can be compressed and encrypted

before transfer and securely stored and archived. Threats and security incidents are detected

and addressed within minutes. The service ensures that you get the full benefit of cloud services,

without compromising your security or compliance requirements.
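A minimal sketch of that compress-then-encrypt step, using gzip plus the third-party cryptography package's Fernet recipe; key provisioning, storage and exchange are deliberately simplified here.

```python
# Sketch of "compressed and encrypted before transfer": gzip a log batch,
# then encrypt it with a symmetric key before shipping it to the cloud.
import gzip

from cryptography.fernet import Fernet   # pip install cryptography

key = Fernet.generate_key()   # in practice, provisioned and stored securely
fernet = Fernet(key)

batch = "\n".join([
    "2012-06-01T10:00:00 app01 user=alice login ok",
    "2012-06-01T10:00:02 app01 user=bob login failed",
]).encode("utf-8")

compressed = gzip.compress(batch)
ciphertext = fernet.encrypt(compressed)   # safe to send over the wire

# The receiving side reverses the steps.
assert gzip.decompress(fernet.decrypt(ciphertext)) == batch
```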

Benefits:

Customized Role-Based Dashboards provide detailed information for compliance

auditors, CIOs, CISOs, security officers, system administrators and Help Desk personnel.

24×7 Security and Operations Monitoring provides detailed information on the IT

environment answering the question “What’s happening?” to allow organizations to

detect and avoid threats.

Daily/Weekly Reporting answers the question “What has happened?” to allow for

detailed analysis of events to help avoid future risk.

Change Control and File Integrity Monitoring answers the question “What is different?”

to control the IT environment and prevent risks.

Simple “Pay As You Go” Pricing eliminates capital expenditures and maintenance costs.

Rapid Deployment allows organizations to implement a SIEM and log management

solution quickly and easily.

What EventTracker Cloud Provides

Automated and Continuous Regulatory Compliance for FFIEC, GLBA, SOX, PCI-DSS,

HIPAA, FISMA, NERC, COCO, ISO 27001/27002/17799 etc.

Security Information & Event Management (SIEM)

Centralized Log Management

Operation and Performance Monitoring

File Integrity Monitoring

Configuration Assessment

Reduce IT Costs by 10% to 25% Every Year

How it works in 7 Steps

EventTracker Cloud has a highly scalable and secured architecture with seven steps:

1. Collect and Store Log Data and Change Data
2. Integrate Supported Devices
3. Correlate, Alert and Take Remedial Action
4. Report and Search
5. Monitor Operations & Security
6. Maintain and Show Evidence of Compliance to Auditors
7. Analytics and Visualization

EventTracker Cloud Deployment Options

EventTracker Cloud is a highly scalable SIEM and log management solution that offers several

deployment options to meet the needs of small organizations with a few dozen critical systems, as

well as larger organizations with thousands of systems spread across multiple locations. Each

organization’s data is separate and secure, and can only be accessed by authorized personnel.

To meet the specific security policies and business needs of your organization, EventTracker

Cloud has four distinct deployment options:

Shared Cloud: Designed for small to medium enterprises

Virtual Private Cloud: Designed for larger organizations or those with multiple locations

Monitored Services: Meets the needs of organizations requiring outsourced support for

their SIEM and log management solution

Shared Cloud MSSP: Enables MSSPs to support their customers’ log management

needs.

Pricing

Pricing for the EventTracker Cloud solution is based on the number of devices that an organization

wishes to monitor, and the level of monitoring and reporting services needed. Basic services

start as low as $1000/month for most customers.


LogMojo Logging & Archival for FortiGate Firewalls

LogMojo is the only solution you need for Logging, Log Archiving, and Content Archiving for

FortiGate Firewalls. LogMojo receives your FortiGate's logs through a secure tunnel utilizing the

native FortiAnalyzer protocol.

Logs are securely received and time-stamped, and archives are hashed to ensure raw logs remain

forensically sound as long as they are needed. The hashed archives are stored and are available

for download through logMojo's web interface, with a 7-year retention period.
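Hashing archives for forensic soundness works along these lines: record a digest when the archive is written and recompute it at any later point; any mismatch shows the file was altered. A minimal sketch with an illustrative path:

```python
# Integrity-hashing sketch: store a SHA-256 digest at archive time and
# re-verify it later to show the raw logs are unchanged.
import gzip
import hashlib

ARCHIVE = "logs-2012-06-01.gz"   # illustrative path

# Write a tiny archive so the example is self-contained.
with gzip.open(ARCHIVE, "wb") as fh:
    fh.write(b"2012-06-01T10:00:00 app01 user=alice login ok\n")

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for block in iter(lambda: fh.read(65536), b""):
            digest.update(block)
    return digest.hexdigest()

recorded = sha256_of(ARCHIVE)            # stored at archive time, ideally
                                         # apart from the archive itself
assert sha256_of(ARCHIVE) == recorded    # any mismatch means tampering
```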


LogMojo Real Time Alerting for FortiGate Firewalls

Need simple, reliable alerting about your FortiGate Firewall, or complex alerting based upon detailed events? LogMojo is the answer! LogMojo provides hundreds of alert templates covering

everything from system down notifications to inappropriate web surfing.

Just like logMojo’s Dynamic Drill-Down Real-Time Reporting, its Alerting system allows you to use the

preconfigured alert templates with a number of filters including IP Addresses, Interfaces, Users,

Groups and many more to create almost unlimited alerting configurations.

Alerts can be delivered to your email address, by text message, or just to the Alerting Archive. The Alerting Archive provides an interface to review all of the events which generated alert conditions. Email alert deliveries are fully throttled based upon user-configurable settings.

Try Before You Buy

1. SaaS availability

2. Timeliness of log data showing up in system

3. Timeliness of log data analysis

4. Regulatory compliance

5. Prompt upgrades to support new attack vectors

6. Prompt upgrades to support upgrades to hardware and software


Questions for the Cloud Provider

1. Is It Safe?
2. How Is It Transported?
3. How Are Keys Stored?
4. How Often Is the Data Transmitted?
5. What Is the Level of Compression and Bandwidth Utilization?
6. What Backup and Redundancy Is Included?
7. What Are the Response Options?
8. How Much of the Data Is Actively Searchable?
9. How Much of the Data Is Stored?
10. What Log Data Will Be Accepted?
11. How Clear Are Its Instructions for Setting Up Devices?
12. How Are Alerts Determined?
13. How Quickly Does Processing Occur?
14. How Often Are the Alerts Updated?
15. How Are System Upgrades Handled?

Considerations for In-House Log Management

1. Could a Personnel Change Ruin Your Log Management Process?
2. Will Your Staff Monitor the Logs Regularly and Maintain Updates?
3. Roll Your Own or Buy an Appliance?

Summary

The largest single change in the 2008 SANS Log Management survey was how the Global 2000

market responded to the question of how much they spent on log file analysis. In the 2007

survey, 59% of respondents chose “No clue, we look at logs as needed or when there is an

issue.” In 2008, only 14% chose that option. This is a strong indication of a growing

awareness of the need for log management. The largest group indicated that they spent under

$25,000 annually on log analysis. The average spending on log file analysis by the Global 2000

works out to $190,000 each, or a $380 million annual market. It is a market holding steady with

2007, in which we estimated the market at $374 million.

While survey responses clearly indicate that organizations are becoming more aware of the

value of log data in security, compliance and maintenance operations, they still have a long way

to go. Sixty-two percent of overall respondents say that ROI is not used as a measure. That

means that the majority are not even looking for ROI, even though in one case logging was

used for billing, a cost center activity.


Organizations are looking for ways to increase their visibility into their log data, putting

demands on tools vendors to continually improve integration, analysis and storage capabilities.

In the meantime, many companies are tackling the log management problem with a

combination of homegrown and commercial tools as they begin to make use of the valuable

data lingering in applications and devices across their organizations.