IncMan SOAR Platform Features – New and Improved

DFLabs is excited to announce the latest release of its industry-leading Security Orchestration, Automation and Response platform, IncMan version 4.3. Solving customers’ problems and adding value to their security programs is a core goal here at DFLabs, and this is reflected in our 4.3 release with over 100 enhancements, additions, and fixes, many suggested by customers, all designed to make the complex task of responding to potential security incidents faster, easier and more efficient.

IncMan 4.3 includes many new bidirectional integrations across a variety of product categories, including threat intelligence, malware analysis, ticket management and endpoint protection, chosen to broaden the orchestration and automation capabilities of our customers.

With IncMan 4.3, we have also greatly enhanced the flexibility of our R3 Rapid Response Runbooks with the addition of two new decision nodes: Filter and User Choice. Filter nodes allow users to further filter and refine information returned by previously executed integrations; for example, filtering IT asset information to include only servers, so that key assets are addressed first. Unlike automated Enrichment actions, automated Containment actions could have serious unintended impacts on the organization. User Choice nodes minimize this risk by allowing users to define critical junctions in the workflow at which a human must intervene and make a decision; for example, human verification may be required before banning a hash value across the enterprise or quarantining a host pending further analysis.
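To make these two node types concrete, here is a minimal Python sketch of the logic they encapsulate (in IncMan the nodes are configured graphically within a runbook, not written as code); the asset records, the server filter and the quarantine prompt are hypothetical examples.

```python
# Hypothetical asset records returned by an earlier enrichment action.
assets = [
    {"hostname": "db01", "role": "server"},
    {"hostname": "ws-114", "role": "workstation"},
    {"hostname": "web02", "role": "server"},
]

# Filter node: narrow the enrichment results to servers so key assets come first.
servers = [asset for asset in assets if asset["role"] == "server"]

# User Choice node: require a human decision before a risky containment action.
for asset in servers:
    answer = input(f"Quarantine {asset['hostname']}? [y/N] ").strip().lower()
    if answer == "y":
        print(f"Containment action queued for {asset['hostname']}")
    else:
        print(f"Skipping {asset['hostname']}: analyst declined containment")
```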


Improvements to our patent-pending Automated Responder Knowledge (DF-ARK) module allow IncMan to make even more intelligent decisions when suggesting response actions, and enhancements to IncMan’s correlation engine give users a more advanced view of the threat landscape over time and across the organization. IncMan’s report engine has been significantly bolstered, allowing users to create more flexible reports for a wider variety of purposes than ever before. Finally, numerous changes have been made to IncMan’s Dashboard and KPI features, allowing users to create more actionable KPIs and gather a complete picture of the organization’s current state of security at a glance.

These are just some of the highlights of our latest IncMan release; IncMan 4.3 includes many other enhancements designed to streamline your orchestration, automation and response process.  If you would like a demo of our latest release, please go to our demo request site. Stay tuned to our website for additional updates, feature highlights,  and demos of our latest release.

How DFLabs IncMan Tackles Meltdown and Spectre Vulnerabilities

Following on from my recent blog post entitled “Meltdown and Spectre – What They Mean to the Enterprise” published in January, I wanted to take a closer look at how these types of hardware vulnerabilities could (and should) easily be detected, managed and mitigated using Security Orchestration, Automation and Response (SOAR) technology, for example with a platform such as IncMan from DFLabs.

Using Meltdown and Spectre as a use case, I want to walk through the automated processes an organization can undertake. There are many pros and cons to using automation, but used correctly it can significantly improve Security Operations Center (SOC) efficiency, saving security analysts many hours of mundane tasks. Alerts can also potentially be responded to and contained before an analyst has even been notified. Using IncMan’s integrations and R3 Rapid Response Runbooks, SOCs can respond quickly when such a vulnerability is detected. To reduce the risk these vulnerabilities present to the organization, the overall goals are as follows:

1) Automatically receive alerts for hosts which have been identified as vulnerable to Meltdown or Spectre.

2) Create an Incident and perform automated Notification, Enrichment and Containment tasks.

Implementation

Let’s move on to the implementation stages. Where should you start? For ease, I will break it down into three simple sections: creating a runbook, utilizing the runbook, and seeing the runbook in action. So, let’s begin…

Creating an R3 Rapid Response Runbook

The first step in reducing the risk from the Meltdown and Spectre vulnerabilities is to create a runbook to handle alerts for newly detected vulnerable hosts.  In this use case, we will use integrations with Jira, McAfee ePO, McAfee Web Gateway, MSSQL Server and QRadar to perform Notification, Enrichment and Containment actions; however, this can easily be adapted to include any other technology integrations as well.


Using a Jira Notification action, a new Jira issue is created. This Notification action should notify the IT or Infrastructure teams and initiate the organization’s normal vulnerability management process.

Next, an MSSQL Server Enrichment action is used to query an IT asset inventory for the host name of the vulnerable host, which is passed to the runbook automatically when the incident is created.  This asset information is then available to the analyst for further review.

Once the IT asset information is retrieved, a decision point is reached: if the asset information indicates that the host is a server, one path (the top path of the runbook) is taken; if it indicates that the host is not a server, the other path (the bottom path) is taken.
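As a rough illustration of this query-then-branch step, the following Python sketch uses the standard library’s sqlite3 module as a stand-in for the MSSQL Server integration; the table layout, column names and placeholder action functions are hypothetical, and the real runbook performs these steps through IncMan’s integration actions rather than custom code.

```python
import sqlite3

def server_path(hostname):
    # Stand-in for the server branch: update Jira, query ePO, tag host, add a task.
    print(f"{hostname}: update Jira issue, query ePO system info, tag host, add analyst task")

def non_server_path(hostname):
    # Stand-in for the non-server branch: query ePO, tag host, block at the web gateway.
    print(f"{hostname}: query ePO system info, tag host, block outbound traffic")

# Hypothetical asset inventory, standing in for the real MSSQL database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE assets (hostname TEXT, asset_type TEXT)")
conn.executemany("INSERT INTO assets VALUES (?, ?)",
                 [("db01", "server"), ("ws-114", "workstation")])

def handle_vulnerable_host(hostname):
    row = conn.execute("SELECT asset_type FROM assets WHERE hostname = ?",
                       (hostname,)).fetchone()
    if row and row[0] == "server":
        server_path(hostname)      # the "top" path of the runbook
    else:
        non_server_path(hostname)  # the "bottom" path of the runbook

handle_vulnerable_host("db01")
handle_vulnerable_host("ws-114")
```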

If the asset is determined to be a server, a Jira Enrichment action is used to update the Jira issue, informing the appropriate parties that the host is a server and should be treated as a higher priority. Next, two McAfee ePO Enrichment actions are performed. The first queries McAfee ePO for the system information of the given host name, providing the analyst with additional information. The second uses McAfee ePO to tag the host with the appropriate tag. Finally, a Task is added to IncMan reminding the analyst to follow up with the appropriate teams to ensure that the vulnerability has been appropriately mitigated.

If the asset is determined not to be a server, the two previously mentioned McAfee ePO Enrichment actions (System Info and Tag) are run immediately. Following these two Enrichment actions, a McAfee Web Gateway Containment action is used to block the host from communicating outside of the network. This Containment step is completely optional but is performed here on non-servers only, to minimize the Containment action’s potential impact on critical systems.

Utilizing the R3 Rapid Response Runbook

Once the new runbook is created, IncMan must be told how and when to automate its use. This is achieved by creating an Incident Template, which will be used any time an incident is generated for a Meltdown or Spectre vulnerability. Through this incident template, critical pieces of information such as Type, Summary, and Category can be automatically applied to the newly created incident.


From the Runbook tab of the Incident Template wizard, the previously created Meltdown and Spectre runbook is selected and set to autorun.  Each time this template is used to generate an incident, the appropriate information such as host name and host IP address will be used as inputs to the runbook and the runbook will be automatically executed.


In this use case, alerts from QRadar are utilized to initiate automatic incident creation within IncMan; however, another SIEM integration, syslog, or email could also be utilized to achieve the same outcome. A new QRadar Incoming Event Automation rule is added, and its defined action is to generate a new incident from the previously created Meltdown and Spectre Incident Template.
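Conceptually, the rule boils down to matching an incoming alert against Meltdown/Spectre indicators and instantiating the incident template with the host details. The sketch below illustrates only that logic; the event fields and template keys are assumptions for this example, and in IncMan the rule is defined through the Incoming Event Automation settings rather than in code.

```python
import re

# CVE identifiers commonly associated with Spectre and Meltdown.
MELTDOWN_SPECTRE = re.compile(
    r"CVE-2017-5715|CVE-2017-5753|CVE-2017-5754|meltdown|spectre", re.IGNORECASE)

# Hypothetical stand-in for the Meltdown and Spectre Incident Template.
INCIDENT_TEMPLATE = {
    "type": "Vulnerability",
    "summary": "Meltdown/Spectre vulnerable host detected",
    "category": "Vulnerability Management",
}

def on_incoming_event(event):
    """Apply the automation rule to a single alert forwarded by the SIEM."""
    if not MELTDOWN_SPECTRE.search(event.get("description", "")):
        return None
    incident = dict(INCIDENT_TEMPLATE)
    incident["host_name"] = event.get("host_name")
    incident["host_ip"] = event.get("host_ip")
    return incident  # the associated runbook would now auto-run with these inputs

print(on_incoming_event({
    "description": "Host vulnerable to Spectre (CVE-2017-5715)",
    "host_name": "db01",
    "host_ip": "10.0.0.12",
}))
```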


Solution in Action

When a QRadar Alert is generated matching the criteria defined for a Meltdown or Spectre vulnerability detection, IncMan will automatically generate a new incident based on the Meltdown and Spectre Incident Template.


Without requiring any action on the part of an analyst, the Meltdown and Spectre runbook is automatically initiated, performing the defined Notification, Enrichment and Containment actions (in this example, the ‘server’ path is taken).


Conclusion

How easy was that?  The entire process has taken place in a matter of minutes, likely before anyone has even had time to acknowledge the alert.  As an analyst begins to manually examine the alert, many of the mundane tasks have already been completed, allowing the analyst to focus on the tasks which require human intervention and reducing the time required to remediate this issue, ultimately reducing risk to the organization.

IncMan has over 100 customizable playbooks for use cases like this. If you would like to see IncMan in action, please feel free to request a demo.

100-Day Countdown to GDPR

For many of us around the world February 14th marks St. Valentine’s Day, but for those of us in Europe, this date also marks the beginning of the 100-day countdown to the upcoming enforcement of the General Data Protection Regulation (GDPR).

As most of us are already aware, the EU GDPR was adopted in April 2016 and is due to be formally enforced on May 25th, 2018. In a nutshell, for those who are not quite so GDPR savvy, the GDPR emphasizes transparency, security, and accountability by data controllers and introduces mandatory Data Protection Impact Assessments (DPIAs) for organizations involved in high-risk processing: for example, where a new technology is being deployed, where a profiling operation is likely to significantly affect individuals, or where there is large-scale monitoring of a publicly accessible area.

Breach Notification Requirements

A DPIA is the process of systematically considering the potential impact of a processing operation, allowing organizations to identify potential privacy issues before they arise and come up with a way to mitigate them. In addition, and a highly important aspect for Security Operation Centers (SOCs) and Computer Security Incident Response Teams (CSIRTs) to be fully aware of and responsive to, data processors must implement an internal breach notification process and inform the supervisory authority of a breach within 72 hours. They must also communicate the breach to affected data subjects without undue delay or face a penalty of up to EUR 20 million or 4% of worldwide annual turnover for the preceding financial year, whichever is greater.

Incident Response Processes and Best Practices

As the number of breaches has risen and cyber attacks have become more sophisticated, authorities have recognized a need for increased data protection regulation. The number of simultaneous processes required in a typical forensic or incident response scenario has also grown. Processes need to cover a broad spectrum of technologies and use cases, must be standardized, and must perform clearly defined, fully documented actions based upon regulatory requirements, international standards and established best practices.

Additionally, context enrichment and threat analysis capabilities must be integrated to facilitate and automate data breach reporting and notification within the timeframe specified by the GDPR. Lastly, customized playbooks must be created to permit rapid response to specific incident types, aid in prioritizing tasks and assigning them to individual stakeholders, and formalize, enforce and measure specific workflows.

Incident Response Management with DFLabs IncMan

Having a platform in place to formalize and support these requirements is crucial. DFLabs IncMan provides all the necessary capabilities to facilitate this. Not only do organizations need an Incident Response plan, they must also have a repeatable and scalable process, as this is one of the steps towards compliance with the GDPR’s accountability principle, requiring that organizations demonstrate the ways in which they comply with data protection principles when transacting business. They must also be able to ensure that they will meet the 72-hour breach notification requirement or face a stiff penalty.

Find out how IncMan can help you become GDPR compliant

Organizations must establish a framework for accountability, as well as a culture of monitoring, reviewing and assessing their data processing procedures to detect, report and investigate any personal data breach. IncMan implements granular and use-case specific incident response procedures with data segregation and critical security control requirements. To enable Incident Response and breach notification in complex organizations and working across different regions, IncMan can be deployed as a multi-tenant solution with granular role-based access.

Cutting Response Time and Accelerating Incident Containment

Automated responses can be executed to save invaluable time and resources and reduce the window from discovery to containment for an incident. Organizations can easily prepare advanced reports from automatically collected incident and forensic data, and distribute notifications based on granular rules to report a breach and notify affected customers when required, in order to comply with the GDPR and avoid a financial penalty.

Finally, the ability to gather and share intelligence from various sources, anonymizing the data so it can be shared safely with 3rd parties, protects the data without inhibiting the investigation. IncMan also contains a Knowledge Base module to document playbooks, threat assessments, situational awareness and best practices, which can be shared and transferred across the organization.

IncMan and Fulfilling GDPR Requirements

In summary, DFLabs IncMan Security Automation and Orchestration platform fulfills the requirements of GDPR by providing capabilities to automate and prioritize Incident Response through a range of advanced playbooks and runbooks, with related enrichment, containment, and threat analysis tasks. It distributes appropriate notifications and implements an Incident Response plan (IRP) in case of a potential data breach, with formalized, repeatable and enforceable incident response workflows.

IncMan handles different stages of the Incident Response and Breach Notification Process, providing advanced intelligence reporting with appropriate metrics, with the ability to gather or share intelligence with 3rd parties as required.

So, this Valentine’s Day, we hope that you are enjoying a romantic dinner for two, knowing that your SOC and CSIRT, as well as the wider organization, have the necessary incident response and incident management best practices implemented to meet the upcoming GDPR requirements in 100 days’ time. If not, speak to one of our representatives to find out more.

Find out how IncMan can help you become GDPR compliant

DFLabs 3rd Party Integrations vs the Market

A consistent feedback point we receive from our users is that their security technology stack is rapidly growing to keep pace with the evolving threat landscape. The days when it was sufficient to deploy a firewall, an intrusion prevention system, antivirus and an identity access management system are long gone. Enterprises are spoilt for choice when it comes to selecting from a wide variety of security technologies – User Entity Behaviour Analytics (UEBA), Network Traffic Analysis (NTA), Endpoint Detection and Response (EDR) and Breach and Attack Simulation (BAS) are just a few of the emerging technologies available to security teams, and this list does not even include the mobile, cloud and IoT offerings required to secure the expanding attack surface. This can seem daunting to many organizations, not just because of the budgetary impact, but also because every one of these technologies requires expertise and knowledge to operate effectively.

As a vendor offering a Security Orchestration, Automation, and Response platform that is designed to integrate with, and orchestrate these different solutions, we often have to make difficult choices as to what we integrate with, and how deeply we integrate. Our focus is on market-leading security technologies, technologies we identify as emerging but of growing importance and effectiveness, and of course also based on what our customers have deployed and ask us to integrate.

There is a trend in our market to exaggerate the number of 3rd party integrations. Marketing collateral often cites hundreds of different 3rd party tools, yet rarely differentiates between the depth of the integrations, whether they are truly bidirectional and whether they are certified by the 3rd party. As an example, any solution that supports Syslog can claim to support hundreds of 3rd party technologies. It’s an open standard, and many solutions can forward syslog messages to a syslog collector. But that does not necessarily mean that the solution also has out-of-the-box parsers to normalize the messages, or that there are automation actions, playbooks or report templates available that can parse and use their content.
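To illustrate the difference between merely receiving syslog and actually normalizing it, here is a deliberately simplified CEF parser in Python. It handles the seven pipe-delimited header fields and a basic key=value extension (ignoring escaped characters), which is roughly the work an out-of-the-box parser has to do before automation actions, playbooks or report templates can use the content.

```python
def parse_cef(message):
    """Parse a CEF record into a dict of normalized fields (simplified)."""
    cef = message.split("CEF:", 1)[1]
    parts = cef.split("|", 7)  # seven header fields, then the key=value extension
    header_keys = ["version", "device_vendor", "device_product", "device_version",
                   "signature_id", "name", "severity"]
    record = dict(zip(header_keys, parts[:7]))
    extension = parts[7] if len(parts) > 7 else ""
    for pair in extension.split():       # naive: ignores escaped spaces in values
        if "=" in pair:
            key, value = pair.split("=", 1)
            record[key] = value
    return record

sample = ("<134>Feb 14 10:00:00 gateway CEF:0|ExampleVendor|ExampleProduct|1.0|100|"
          "Vulnerable host detected|5|src=10.0.0.12 shost=db01")
print(parse_cef(sample))
```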

Reducing this to a purely quantitative marketing message also entirely misses the point. Organizations only really care about the technologies they have deployed, or are planning to acquire. The quantity here is entirely misleading. More importantly, users are not stupid. They will see through this charade at the latest when conducting a proof of concept, and at that point any vendor following this approach has some uncomfortable explaining to do. No relationship, personal or business, gets off to a good start based on fudging the truth.

I have rarely seen an RFP that was based purely on a quantitative measure of supported 3rd party integrations, so it is baffling why marketers believe this to have any impact.

At DFLabs we have decided to go a different way. We want to clearly state which of our integrations are bidirectional as opposed to only based on data ingestion, and which integrations are certified, or compatibility tested by our integration partners.

While at first glance this appears to put us at a disadvantage, we trust in the intelligence of our customers. We hope that it will help them make better-informed decisions and that they will give us credit for being honest and realistic.

3 Ways to Create Cyber Incidents in DFLabs IncMan

At the heart of incident response, and by extension of Security Automation and Orchestration technologies, resides the Cyber Incident. A typical definition of a cyber security incident is “Any malicious act or suspicious event that compromises or attempts to compromise, or disrupts or tries to disrupt, a critical cyber asset”. Almost everything we do in a SOC or a CSIRT is based on incidents, and there are a variety of potential incident sources, for example:

  1. Alerts from cyber security detection technologies such as Endpoint Detection & Response or User Entity Behavior Analytics tools
  2. Alerts from Security Information & Event Management Systems (SIEM)
  3. Emails from ITSM or case management systems
  4. Website submissions from internal stakeholders and whistle-blowers
  5. Phone calls from internal users and external 3rd parties

This diversity of incident sources means that a solid SAO solution must offer a variety of different methods to create incidents. Regulatory frameworks also frequently mandate being able to originate incidents from different sources. DFLabs IncMan offers a rich set of incident creation options.

There are three primary ways to create incidents in IncMan, offering flexibility to accommodate a variety of incident response process requirements and approaches.

Option 1: Automated Incident Creation

We will feature automated incident creation in more detail in a future post. In the meantime, I will show you where to find this feature.

Select the Settings menu, then head to External Sources:


Under External Sources, there are three options available to use as sources for automated incident creation:

  1. Incoming events automation, for CEF/Syslog
  2. Incoming Mail automation, for a monitored email account
  3. Integrations, for all QIC integration components.

Automated incident creation supports a variety of filters to enable a rules-based approach. In addition, it is also possible to create incidents using our SOAP API; certified 3rd party applications, such as Splunk, use this mechanism to create incidents within IncMan.
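As a rough sketch of what an API-driven integration can look like, the snippet below builds a SOAP envelope and prepares the HTTP request using Python’s standard library. The endpoint URL, operation name and field names are placeholders invented for this illustration; the actual WSDL, schema and authentication are described in the IncMan API documentation.

```python
import urllib.request

ENDPOINT = "https://incman.example.com/api/soap"  # placeholder URL, not a real endpoint

# Hypothetical envelope: the operation and element names are illustrative only.
envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <CreateIncident>
      <Summary>Meltdown/Spectre vulnerable host detected</Summary>
      <Type>Vulnerability</Type>
      <HostName>db01</HostName>
    </CreateIncident>
  </soapenv:Body>
</soapenv:Envelope>"""

request = urllib.request.Request(
    ENDPOINT,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8"},
)
# urllib.request.urlopen(request) would submit the call; it is left out here
# because the endpoint and schema above are placeholders.
print(envelope)
```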

Option 2: Manual Incident Creation

Click the Incidents menu option, then click the + symbol on the Incidents screen:


Fill out all mandatory fields (these can be defined in the custom fields screen) then step through and complete the incident wizard to create the incident:


Once all relevant fields have been completed, click Save. The incident will then appear in the incident view as part of the queue you assigned in the Details screen.

Option 3: Incident Creation from Source

Select an incident source for the incident you want to create, for example, a Syslog or CEF message, an email, or a threat intelligence source (STIX/TAXII, ThreatConnect):


In this screen, you can then convert this source item to an incident, or link the source to an existing incident.

Don’t Wait for the Next Breach – Simulate It

Over the past few months, during the post-hoc analysis of WannaCry and Petya, we have spoken at great length about what should have been done during the incident. This is quite a tricky thing to do in a balanced way, because we are all clever in hindsight. What hasn’t been spoken about enough is understanding, more generally, what we need to do when things go wrong.

This question isn’t as simple as it appears, as there are a lot of aspects to consider during an incident, and only a brief window to identify, contain and mitigate a threat. Let’s look at just a few of these:

Response times
This is often the greatest challenge, but it is of the utmost importance. Response is not only about understanding the “how” and “why” of a threat; it is also about putting a chain of events into action to make sure that the “what” doesn’t spiral out of control.

Creating an effective playbook
A playbook should be a guide to how your incident response plan must be executed, and orchestration platforms contain these playbooks/runbooks. Note that these are not generic plug-and-forget policies: they need to be optimized and mapped to your business and regulatory requirements, and are often unique to your organization. Otherwise, the incident will be driven by the wrong playbook.

Skills and tool availability
Do you have the correct skills and tools available, and are you able to leverage them? Do you understand where your security gaps are, and do you know how to mitigate them?

On paper, incident response always works. Right until the moment of truth during a data breach that shows that it doesn’t. To avoid relying on theory only, it is best to run breach simulations and simulate some of the attacks that may affect your organization to find out if your processes and playbooks also work under more realistic conditions.

We’re always playing catch-up for many reasons: new technologies, new vulnerabilities, and new threats. Software and hardware may always be at the mercy of hackers, criminals and other threat actors, so prevention alone is futile. We have to become more resilient and better at dealing with the aftermath of an attack.

The key summary for me is this: how do you respond, and can the response be improved? Utilize the lessons learned in breach simulations to understand how to make the response better than before.

3 Best Practices for Incident Categorization to Support Key Performance Indicators

The DNA sequence for each human is 99.5% similar to any other human. Yet when it comes to incident response and the manner in which individual analysts may interpret the details of a given scenario, our near-total similarity seems to all but vanish. Where one analyst might characterize an incident as the result of a successful social engineering attack, another may instead identify it as a generic malware infection. Similarly, a service outage may be labeled as a denial of service by some, while others will choose to attribute the root cause to an improper procedure carried out by a systems administrator. Root cause and impact, or incident outcome, are just a couple of the many considerations that, unless properly accounted for in a case management process, will otherwise play havoc on a security team’s reporting metrics.

Poor Key Performance Indicators can blind decision makers

What is the impact of poor KPIs? All too often, the end result is equally poor strategic decisions. Money and effort may be assigned to the wrong measures, for example to additional ineffective prevention controls instead of improved response capability. In a worst-case scenario, poor KPIs can blind decision makers to the most pertinent security issues of their enterprise, and the necessary funding for additional security may be withheld altogether.

Three best practices are required to address this all too common problem of attaining accurate reporting:

  1. A coherent incident management process is necessary in order to properly categorize incident activity. Its definitions must be clear, taking into account outliers, clarifying how root causes and impacts are to be tracked, and providing a workflow to assist analysts in accurately and consistently determining incident categorization.
  2. The process must be enforced to guarantee uniform results in support of coherent KPIs. Training, quality assurance, and reinforcement are all necessary to ensure total stakeholder buy-in.
  3. Security teams must have the technologies to support effective incident response and proper categorization of incidents.

There are several ways that the IncMan platform supports the three best practices:

First, IncMan provides a platform to act as the foundation for an incident management program. It provides customizable incident forms allowing for complete tailoring to an organization and the details it must collect in support of its unique reporting requirements. Custom fields specific to distinct incident types allow for detailed data collection and categorization. These custom fields can be coupled with common attributes to track specific data, thereby providing a high level of flexibility for security teams in maintaining absolute reporting consistency across the team’s individual members.

Next, playbooks can be associated with specific incident types, providing step-by-step instructions for specialized incident response activities. Playbooks enforce consistency and can further reinforce reporting requirements. However, playbooks are not completely static, and while they certainly provide structure, IncMan’s playbooks also offer the ability to improvise, add, remove or substitute actions on the fly.

The platform’s Knowledge Base offers a repository for reference material to further supplement playbook instructions. Information collection requirements defined within playbook steps can be linked to Knowledge Base references, arming analysts with added information, for example with standard operating procedures pertaining to individual enterprise security tools, or checklists for applicable industry reporting requirements.

IncMan also includes Automated Responder Knowledge (ARK), a machine learning driven approach that learns from past incidents and the response to them, to suggest suitable playbooks for new or related incident types. This is not only useful for helping to identify specific campaigns and otherwise connected incident activity but can also highlight historical cases that can serve as examples for new or novice analysts.

Finally, the platform’s API and KPI export capabilities enable the extraction of raw incident data, allowing for data mining of valuable reporting information using external analytics tools. This information can then be used to paint a much clearer picture of an enterprise’s security posture and allow for fully-informed strategic decision-making.
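As a simple illustration of the kind of external analysis this export enables, the sketch below computes a mean-time-to-contain figure from exported incident records. The CSV columns are invented for the example; the fields actually available depend on your IncMan configuration and the export you define.

```python
import csv
import io
from datetime import datetime

# Hypothetical export of incident records (inlined here so the example is self-contained).
export = io.StringIO(
    "incident_id,category,opened,contained\n"
    "1001,Malware,2018-02-01T09:00:00,2018-02-01T09:42:00\n"
    "1002,Phishing,2018-02-02T14:10:00,2018-02-02T15:05:00\n"
)

minutes_to_contain = []
for row in csv.DictReader(export):
    opened = datetime.fromisoformat(row["opened"])
    contained = datetime.fromisoformat(row["contained"])
    minutes_to_contain.append((contained - opened).total_seconds() / 60)

print(f"Mean time to contain: {sum(minutes_to_contain) / len(minutes_to_contain):.1f} minutes")
```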

Collectively, the IncMan features detailed above empower an organization with the means to support consistency in incident categorization, response, and reporting. For more information, please visit us at https://www.dflabs.com

Slaying the Hydra – Incident Response and Advanced Targeted Attacks

In incident response, protecting against a targeted attack is like slaying the hydra. For those not familiar with what a hydra is, it is a multi-headed serpent from Greek mythology, that grows two new heads for every head you chop off. A determined attacker will try again and again until they succeed, targeting different attack vectors and using a variety of tactics, techniques, and procedures.

The Snowden and Shadow Brokers leaks really drove this home, giving partial insight into the toolkit of nation-state actors. What really stuck out to me was the sheer variety of utilities, frameworks, and techniques used to infiltrate and gain persistence in a target. Without the leaks, would it be possible to reliably determine that all of those hacking tools belonged to a single entity? Would a large organization with thousands of alerts and hundreds of incidents every day be able to identify that these different attacks belonged to a single, concerted effort to breach their defenses, or would they conclude that these were all separate, unrelated attempts?

Our colleagues in the threat intelligence and forensic analysis industries have a much better chance of correlating these tools and their footprint in the wild – they may discover that some of these tools share a command and control infrastructure, for example. A few did have at least an outline of the threat actor, but judging by the spate of advisories and reports that were released after the leaks, not very many appear to have achieved this to any great degree. The majority were only able to piece the puzzle together once equipped with a concise list of Indicators of Compromise (IoCs) and TTPs to begin hunting with.

“How does this affect me? We are not important enough to attract the attention of a nation state actor”

Some readers may now be thinking, “How does this affect me? We are not important enough to attract the attention of a nation state actor”. I would urge caution in placing too much faith in that belief.

On the one hand, for businesses in some countries the risk of economic espionage by nation-state hacking has decreased. As I wrote on Securityweek in July, China has signed agreements with the USA, Canada, Australia, Germany and the UK limiting hacking for the purpose of stealing trade secrets and economic espionage. However, this does not affect hacking for national security purposes, and it will have little impact on privately conducted hacking. These are also bilateral agreements, and none exist with other nations, for example, Russia or North Korea. For militarily and economically weaker nation states, offensive cyber security is a cheap, asymmetric method of gaining a competitive or strategic advantage. As we have seen, offensive cyber activity can target civilian entities for political rather than economic reasons, and hackers are increasingly targeting the weakest link in the supply chain. This means that the probability of being targeted today is based more on your customer, partner, and supply chain network than on what your organization does in detail. Security through obscurity has never been a true replacement for actual security, but it has lost its effectiveness as targeted attacks have moved beyond only focusing on the most prominent and obvious victims. It has become much easier to suffer collateral damage.

Cyber criminals are becoming more organized and professional

On the other hand, cyber criminals are becoming more organized and professional, with individual threat actors selling their services to a wide customer base. A single small group of hackers like LulzSec may have a limited toolbox and selection of TTPs, but professional cybercrime groups have access to numerous hackers, supporting services and purpose-built solutions. If they are targeting an organization directly and are persistent rather than opportunistic, it will be just as difficult to discern that a single concerted attack by one determined threat actor is taking place.

What this means in practical reality for any organization that may become the target of a sophisticated threat actor, is that you have to be on constant alert. Identifying, responding to and containing a threat is not a process to be stepped through with a final resolution step – instead, cyber security incident response is an ongoing, continuous and cyclical process. Advanced and persistent attacks unfold in stages and waves, and like a war consist of a series of skirmishes and battles that continue until one side loses the will to carry on the conflict or succeeds in their objectives. Like trying to slay the hydra, each incident that you resolve means that the attacker will change their approach and that the next attempt may be more difficult to spot. Two new heads have grown instead of one.

To tackle this requires that we cultivate a perpetual state of alertness in our SOC and CSIRT

To tackle this requires that we cultivate a perpetual state of alertness in our SOC and CSIRT – but we must do this without creating a perpetual state of alarm. The former means that your team of analysts is always aware and alert, looking at individual incidents as potentially just one hostile act of many that together could constitute a concerted effort to exfiltrate your most valuable data, disrupt your operational capacity, or abuse your organization to do this to your partners or customers. In the latter case, your analysts will suffer from alert fatigue, a lack of true visibility of threats, and a lack of energy and time to be able to see the bigger picture.
The hydra will have too many heads to defeat.

In the Greek legend of Heracles, the titular hero eventually defeats the Hydra by cauterizing each decapitated stump with fire to prevent any new heads from forming. Treating an incident in isolation is the Security Incident Response equivalent of chopping off the head of the hydra without burning the stump. Applied to our problem, burning the stump means that we have to conduct the response to each incident thoroughly and effectively, and continue the process well beyond containment.

We must invest more time in hunting and investigating, and we have to correlate and analyze the relationship between disparate incidents. We must use threat intelligence more strategically to derive situational awareness, and not just tactically as a machine-readable list of IoCs. This also requires gathering sufficient forensic evidence and context data about an incident and related assets and entities during the incident response process, so that we can conduct post-event analysis and continuous threat assessment after containment and mitigation have been carried out. This way we can better anticipate the level of threat that we are exposed to, and make more informed decisions about where to focus our resources, add mitigating controls and improve our defenses. In incident response, "burning the stump" means making it more difficult for threat actors to succeed in the future by presenting them with a hardened attack surface, reducing their dwell time in our infrastructure, and reducing the time we need to discover and contain them. To do this we need to learn from every incident we manage.

Visual Event Correlation Is Critical in Cyber Incident Associational Analysis

I can remember sometime around late 2001 or early 2002, GREPing Snort logs for that needle in a haystack until I thought I was going to go blind. I further recall around the same time cheering the release of the Analysis Console for Intrusion Databases (ACID) tool which helped to organize the information into something that I could start using to correlate events by way of analysis of traffic patterns.

Skip ahead and the issues we faced while correlating data subtly changed from a one-off analysis to a lack of standardization for the alert formats that were available in the EDR marketplace. Each vendor was producing significant amounts of what was arguably critical information, but unfortunately all in their own proprietary format. This rendered log analysis and information tools constantly behind the 8-ball when trying to ingest all of these critical pieces of disparate event information.

We have since evolved to the point that log file information sharing can be easily facilitated through a number of industry standards, e.g., RFC 6872. Unfortunately, with the advent of the Internet of Things (IoT), we have also created new challenges that must be addressed in order to make the most effective use of data during event correlation. Specifically, how do we quickly correlate and review:

a. Large amounts of data;

b. Data delivered from a number of different resources (IoT);

c. Data which may be trickling in over an extended period of time; and

d. Data segments that, when evaluated separately, will not give insight into the “Big Picture.”

How can we now ingest these large amounts of data from disparate devices and rapidly draw conclusions that allow us to make educated decisions during the incident response life cycle? I can envision success coming through the intersection of 4 coordinated activities, all facilitated through event automation:

1. Event filtering – This consists of discarding events that are deemed to be irrelevant by the event correlator. This is also important when we seek to avoid alarm fatigue due to a proliferation of nuisance alarms.

2. Event aggregation – This is a technique where a collection of many similar events (not necessarily identical) are combined into an aggregate that represents the underlying event data.

3. Event Masking – This consists of ignoring events pertaining to systems that are downstream of a failed system.

4. Root cause analysis – This is the last and quite possibly the most complex step of event correlation. Through root cause analysis, we can visualize data juxtapositions to identify similarities or matches between events, determine whether some events can be explained by others, or identify causal factors between security events.

The results of these 4 event activities will promote the identification and correlation of similar cyber security incidents, events and epidemiologies.
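To ground the first three activities, here is a small Python sketch over a batch of hypothetical events; the event fields, severity threshold and downstream-dependency map are invented for illustration, and root cause analysis would then operate on the reduced, correlated set that remains.

```python
from collections import Counter

events = [
    {"host": "web02", "signature": "link-down", "severity": 4},
    {"host": "app03", "signature": "unreachable", "severity": 3},
    {"host": "db01", "signature": "port-scan", "severity": 1},
    {"host": "db01", "signature": "port-scan", "severity": 1},
]
downstream_of = {"app03": "web02"}  # app03 sits behind web02

# 1. Event filtering: discard low-severity noise to avoid alarm fatigue.
relevant = [e for e in events if e["severity"] >= 2]

# 2. Event aggregation: collapse similar (not necessarily identical) events
#    into counts per host and signature.
aggregate = Counter((e["host"], e["signature"]) for e in relevant)

# 3. Event masking: ignore events from systems downstream of a failed system.
failed_hosts = {e["host"] for e in relevant if e["signature"] == "link-down"}
visible = [e for e in relevant if downstream_of.get(e["host"]) not in failed_hosts]

print(aggregate)
print(visible)
```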

According to psychology experts, up to 90% of information is transmitted to the human brain visually. Taking that into consideration, when we seek to construct associational links between large amounts of data, we must be able to process the information using a visual model. DFLabs IncMan™ provides a feature-rich correlation engine that is able to extrapolate information from cyber incidents in order to present the analyst with a contextualized representation of current and historical cyber incident data.

In the resulting correlation graph, IncMan helps simplify and speed up a comprehensive response by identifying the original point of entry into the network and then visually representing the network nodes that were subsequently affected, denoted by their associational links.

The ability to ingest large amounts of data and conduct associational link analysis and correlation, while critical, does not have to be overly complicated, provided of course that you have the right tools. If you’re interested in seeing additional capabilities available to simplify your cyber incident response processes, please contact us for a demo at [email protected]

A Weekend in Incident Response #12: How to Create Cyber Incident Recovery Playbooks in Line with New NIST Guide

When it comes to protecting your organization against cyber incidents, you can never be too careful. The methods and techniques employed by cyber criminals are becoming increasingly sophisticated with each passing day, requiring you to adapt and improve your cyber defense accordingly. One of the most important aspects of any type of protection against cyber attacks is the way you respond to and recover from current and past cybersecurity events. Cyber incident recovery playbooks as an integral part of an organization’s incident response strategy can go a long way toward reducing reaction times and restoring operations as soon as possible following an attack.

In this regard, it can be said that cybersecurity incident response platforms are necessary for every organization that needs to protect information and other assets that could be potential targets of cyber criminals. These types of platforms help businesses and government agencies stave off cyber attacks and recover from data breaches, and their usage is in line with recommendations by the United States National Institute of Standards and Technology (NIST). To make it easier for organizations to recover from various cybersecurity incidents as quickly as possible, the NIST constantly issues new and updated guidelines that represent a good foundation that organizations can rely on while developing their cyber incident response plans. The latest guide introduced by the NIST focuses on what organizations can do to make their recovery procedures and processes more effective and less time-consuming.

Efficient Risk Management

The Guide for Cybersecurity Event Recovery encompasses wide-ranging tips on how to create a best-practice plan for making an organization’s systems fully operational following a breach. One of the key points addressed in the guide is that recovery is a crucial aspect of an organization’s broader risk management efforts. There are various solutions for bringing a system back online, but no matter the severity of the breach that brought the system down, every organization needs to be prepared in advance to respond to these events. To do that, organizations are advised to adopt detailed plans and cyber incident recovery playbooks for various types of cybersecurity incidents, so that they can reduce their reaction time and minimize the damage in the event of a data breach.

Playbooks Are Central to Recovery Processes and Procedures

When it comes to recovery, the NIST guide basically states that every organization needs to focus on the development of recovery processes and procedures that are centered around playbooks, which would allow them to respond to different types of breaches in the most effective way.

Automated playbooks are considered to be a crucial tool for a successful recovery operation. Using a platform that provides automated cyber security incident recovery playbooks increases your organization’s level of preparedness to quickly respond to cybersecurity events and recover from data breaches, ransomware, and other incidents. The guide advises recovery teams within each organization to run the plays in tabletop exercises so that they can remain constantly aware of all potential risk scenarios and detect potential gaps in their response plans.

In addition to playbooks, the guide highlights documenting current and past cybersecurity incidents as another important factor for improving an organization’s recovery capabilities. To that end, organizations should utilize a platform that includes automated playbooks and has the ability to track digital evidence and analyze the causes of cybersecurity incidents, followed by automated creation of extensive and detailed incident reports. A platform of this type is the best solution for comprehensive cybersecurity incident protection, encompassing identification, detection, response, and recovery.