100-Day Countdown to GDPR

For many of us around the world February 14th marks St. Valentine’s Day, but for those of us in Europe, this date also marks the beginning of the 100-day countdown to the upcoming enforcement of the General Data Protection Regulation (GDPR).

As most of us are already aware, the EU GDPR was adopted in April 2016 and is due to be formally enforced from May 25th, 2018. In a nutshell, for those who are not quite so GDPR savvy, the GDPR emphasizes transparency, security, and accountability by data controllers, and introduces mandatory Data Protection Impact Assessments (DPIAs) for organizations involved in high-risk processing: for example, where a new technology is being deployed, where a profiling operation is likely to significantly affect individuals, or where there is large-scale monitoring of a publicly accessible area.

Breach Notification Requirements

A DPIA is the process of systematically considering the potential impact of processing, allowing organizations to identify potential privacy issues before they arise and come up with a way to mitigate them. In addition, and a highly important aspect for Security Operation Centers (SOCs) and Computer Security Incident Response Teams (CSIRTs) to be fully aware of and responsive to, organizations must implement an internal breach notification process, and data controllers must inform the supervisory authority of a breach within 72 hours. They must also communicate the breach to affected data subjects without undue delay, or face a penalty of up to EUR 20 million or 4% of worldwide annual turnover for the preceding financial year, whichever is greater.
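The 72-hour clock starts when the organization becomes aware of the breach, so incident response tooling typically tracks the remaining window explicitly. A minimal sketch of that deadline arithmetic (the function names here are illustrative, not from any particular product):

```python
from datetime import datetime, timedelta, timezone

# GDPR Article 33 allows at most 72 hours from becoming aware of a breach.
NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(awareness_time: datetime) -> datetime:
    """Return the latest time the supervisory authority must be notified."""
    return awareness_time + NOTIFICATION_WINDOW

def hours_remaining(awareness_time: datetime, now: datetime) -> float:
    """Hours left in the notification window (negative once overdue)."""
    return (notification_deadline(awareness_time) - now).total_seconds() / 3600

aware = datetime(2018, 5, 25, 9, 0, tzinfo=timezone.utc)
print(notification_deadline(aware))  # 2018-05-28 09:00:00+00:00
```

A response platform would raise escalating alerts as `hours_remaining` shrinks, rather than leaving the deadline to manual tracking.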

Incident Response Processes and Best Practices

As the number of breaches has risen and cyber attacks have become more sophisticated, authorities have recognized a need for increased data protection regulation. The number of simultaneous processes required in a typical forensic or Incident Response scenario has also grown. Processes need to cover a broad spectrum of technologies and use cases, must be standardized, and must perform clearly defined, fully documented actions based upon regulatory requirements, international standards, and established best practices.

Additionally, context enrichment and threat analysis capabilities must be integrated to facilitate and automate data breach reporting and notification within the timeframe specified by GDPR. Lastly, customized playbooks must be created to permit rapid response to specific incident types, aid in prioritizing tasks and assigning them to individual stakeholders, and formalize, enforce, and measure specific workflows.

Incident Response Management with DFLabs IncMan

Having a platform in place to formalize and support these requirements is crucial. DFLabs IncMan provides all the necessary capabilities to facilitate this. Not only do organizations need an Incident Response plan, they must also have a repeatable and scalable process, as this is one of the steps towards compliance with the GDPR’s accountability principle, requiring that organizations demonstrate the ways in which they comply with data protection principles when transacting business. They must also be able to ensure that they will meet the 72-hour breach notification requirement or face a stiff penalty.

Find out how IncMan can help you become GDPR compliant

Organizations must establish a framework for accountability, as well as a culture of monitoring, reviewing and assessing their data processing procedures to detect, report and investigate any personal data breach. IncMan implements granular and use-case specific incident response procedures with data segregation and critical security control requirements. To enable Incident Response and breach notification in complex organizations and working across different regions, IncMan can be deployed as a multi-tenant solution with granular role-based access.

Cutting Response Time and Accelerating Incident Containment

Automated responses can be executed to save invaluable time and resources and reduce the window from discovery to containment for an incident. Organizations can easily prepare advanced reports from automatically collected incident and forensic data, and distribute notifications based on granular rules to report a breach and notify affected customers when required, to comply with GDPR and avoid a financial penalty.

Finally, the ability to anonymize intelligence gathered from various sources allows it to be shared safely with 3rd parties, protecting the data without inhibiting the investigation. IncMan contains a Knowledge Base module to document playbooks, threat assessments, situational awareness, and best practices, which can be shared and transferred across the organization.

IncMan and Fulfilling GDPR Requirements

In summary, DFLabs IncMan Security Automation and Orchestration platform fulfills the requirements of GDPR by providing capabilities to automate and prioritize Incident Response through a range of advanced playbooks and runbooks, with related enrichment, containment, and threat analysis tasks. It distributes appropriate notifications and implements an Incident Response plan (IRP) in case of a potential data breach, with formalized, repeatable and enforceable incident response workflows.

IncMan handles different stages of the Incident Response and Breach Notification Process, providing advanced intelligence reporting with appropriate metrics, with the ability to gather or share intelligence with 3rd parties as required.

So, this Valentine’s Day, we hope that you are enjoying a romantic dinner for two, knowing that your SOC and CSIRT, as well as the wider organization, has the necessary incident response and incident management best practices implemented to sufficiently meet the upcoming GDPR requirements in 100 days’ time. If not, speak to one of our representatives to find out more.

Find out how IncMan can help you become GDPR compliant

Overcoming the Tower of Babel in Your Cybersecurity Program

Best practices for communicating cybersecurity risks and efficiency

One of the most difficult challenges encountered within risk management in today’s ever-changing cybersecurity environment is the ability to communicate the risks posed to an organization effectively. Business executives expect communication to be in their own language, focusing on the financial implications regarding gain, loss, and risk, and the difficulty of translating traditional security terms and nomenclature into the risk statements those executives expect poses a serious challenge. Therefore, it is the responsibility of a cybersecurity professional to ensure that security risks are communicated to all levels of the organization using language that can be easily understood.

The communication of security metrics plays a crucial role in ensuring the effectiveness of a cybersecurity program. When disseminating information on cyber risks, several aspects of communication should be considered. For example, a security professional should be cognizant of the credibility of the information’s source, the targeted audience and how to place the risk into perspective. We firmly believe that the success of a business today is directly related to the success of its cybersecurity program. This is largely due to the fact that all organizations depend on technology. Specifically, the interconnectedness of digital technologies translates to a significant potential for damage to an organization’s operational integrity and brand credibility, if its digital assets are not meticulously safeguarded. We only need to look at the recent Equifax breach for an illustrative example of this. Considering the potential impact of cyber attacks and data breaches, organizations must improve how they communicate cybersecurity risk.

The first step to ensuring effective communication of cyber risks involves a comprehensive business impact assessment. This must consider the organization’s business goals and objectives. Business impact assessments focus on how the loss of critical data and operational integrity of core services and infrastructure will impact a business. Furthermore, it acts as a basis for evaluating business continuity and disaster recovery strategies.

The second step is the identification of key stakeholders and their responsibilities. According to experts, this step plays a significant role in being prepared to mitigate the impact of cyber risks. Stakeholders are directly affected by a breach and have the most skin in the game. Identifying stakeholders should not be a one-off exercise but must be conducted regularly. An important consideration is that the more stakeholders there are, the greater the scope for miscommunication. Failure to identify the responsible stakeholders will increase the probability that risk is miscommunicated. In the case of a breach, it means that the response will be ineffective.

The third and most critical step is the identification of Key Risk Indicators (KRIs) tied to your program’s Key Performance Indicators (KPIs). Doing this correctly will mean communicating cyber risks to executives in a way that allows them to make informed decisions. As an example, the number or severity of vulnerabilities on a critical system is meaningless to non-technical executives. Stating that a critical system that processes credit card data is vulnerable to data loss is more meaningful. Once business impacts have been assessed, stakeholders have been identified, and meaningful security metrics have been determined, regular communication to various stakeholders can take place.

Different stakeholders have unique needs. This must be considered when communicating KRIs and KPIs. When delivering information, we must accommodate both the stakeholders that prefer summaries and those that prefer reviewing data to make their own conclusions. DFLabs’ IncMan generates customizable KPI and incident reports designed to cater to both audiences. Cybersecurity program metrics must also focus on costs in time and money to fulfill business needs. The ability to track these metrics is a key differentiator for DFLabs IncMan.

DFLabs’ IncMan is designed to not only provide the best in class incident orchestration and response capabilities but also provides the ability to generate customizable KPI reports that accurately reflect up-to-the-minute metrics on the health of your cybersecurity infrastructure. If your organization needs to get a true, customizable view that incorporates all stakeholders please contact us at [email protected] for a free, no-obligation demonstration of how we can truly keep your cyber incidents under control.

The Overlooked Importance of Incident Management

Whether you call it Incident Management or Incident Handling, most will agree that there is a distinct difference between responding to an incident and managing an incident. Put simply, Incident Response can be defined as the “doing”, while Incident Management can be defined as the “orchestrating”. Proper Incident Management is the foundation and structure upon which a successful Incident Response program must be based. There are numerous blogs, articles and papers addressing various aspects of the differences between Incident Response and Incident Management, dating back at least a decade. Why add another to the top of the pile? Because while most organizations now see the value in putting people, tools, and basic processes in place to respond to the inevitable incident, many still do not take the time to develop a solid Incident Management process to orchestrate the response effort.

Security incidents create a unique environment, highly dynamic and often stressful, and outside the comfort zone of many of those who may be involved in the response process. This is especially true during complex incidents where ancillary team members, such as those from Human Resources, Legal, Compliance or Executive Management, may become involved. These ancillary team members are often accustomed to working in a more structured environment and have had very little previous exposure to the Incident Response process, making Incident Management an even more critical function. Although often overlooked, the lack of effective Incident Management will invariably result in a less efficient and effective process, leading to increased financial and reputational damage from an incident.

Many day-to-day management processes do not adapt well to these complex challenges. For example, as the size and complexity of a security incident increases, the number of people that a single manager can directly supervise effectively decreases. It is also not uncommon for some employees to report to more than one supervisor. During a security incident, this can lead to mixed directives and confusion. During a security incident, it is critical that information flows quickly and smoothly both vertically and horizontally. Many organizations’ existing communication methods do not adapt well to this.

When an ad-hoc Incident Management system is used, the response process becomes much less consistent and effective. A common pitfall of this ad-hoc management style is that it can create a flat management structure, forcing the Incident Response Coordinator to directly oversee the functions of many groups with vastly different objectives. A flat structure such as this also tends to inhibit the flow of information between the individual groups.

 

[Figure: incident management 1]

 

Another common pitfall of this ad-hoc management style is that it often results in a fragmented and disorganized process. Without proper management to provide clear objectives and expectations, it is easy for individual groups to create their own objectives based on what they believe to be the priority. This seriously limits the effective communication between individual groups, forcing each to work with incomplete or incorrect information.

 

[Figure: incident management 2]

 

There are numerous ways in which the Incident Management process can be streamlined. On Wednesday, January 31st, DFLabs will be releasing a new whitepaper titled “Increasing the Effectiveness of Incident Management”, discussing the lessons that can be learned from decades of trial and error in another profession, the fire service, to improve the effectiveness of the Incident Management process. John Moran, Sr. Product Manager at DFLabs, will also be joining Paul and the Enterprise Security Weekly Team on their podcast at 1 PM EST on January 31st to discuss some of these lessons in more detail. Stay tuned to the DFLabs website, or listen in on the podcast on January 31st for more details!

Download the “Increasing the Effectiveness of Incident Management” whitepaper here

DFLabs 3rd Party Integrations vs the Market

A consistent feedback point we receive from our users is that their security technology stack is rapidly growing to keep pace with the evolving threat landscape. The days when it was sufficient to deploy a firewall, an intrusion prevention system, antivirus and an identity access management system are long gone. Enterprises are spoilt for choice when it comes to selecting from a wide variety of different security technologies – User Entity Behaviour Analytics (UEBA), Network Traffic Analysis (NTA), Endpoint Detection and Response (EDR), and Breach and Attack Simulation (BAS) are just a few of the emerging technologies available to security teams, and this list does not even include the mobile, cloud and IoT offerings that are required to secure the expanding attack surface. This can seem daunting to many organizations, not just for the budgetary impact, but also because every one of these technologies requires expertise and knowledge to operate effectively.

As a vendor offering a Security Orchestration, Automation, and Response platform that is designed to integrate with, and orchestrate these different solutions, we often have to make difficult choices as to what we integrate with, and how deeply we integrate. Our focus is on market-leading security technologies, technologies we identify as emerging but of growing importance and effectiveness, and of course also based on what our customers have deployed and ask us to integrate.

There is a trend in our market to exaggerate the number of 3rd party integrations. Marketing collateral often cites hundreds of different 3rd party tools, yet rarely differentiates between the depth of the integrations, whether they are truly bidirectional, and whether they are certified by the 3rd party. As an example, any solution that supports Syslog can claim to support hundreds of 3rd party technologies. It’s an open standard, and many solutions can forward syslog messages to a syslog collector. But that does not necessarily mean that the solution also has out-of-the-box parsers to normalize the messages, or that there are automation actions, playbooks or report templates available that can parse and use their content.
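The gap between receiving a syslog line and actually normalizing it is easy to illustrate. A minimal sketch, assuming a classic RFC 3164-style `<PRI>` prefix (the function and field names here are illustrative, not from IncMan or any other product):

```python
import re

# Minimal RFC 3164-style pattern: a "<PRI>" prefix followed by the message body.
SYSLOG_RE = re.compile(r"^<(?P<pri>\d{1,3})>(?P<msg>.*)$")

def parse_syslog(line: str) -> dict:
    """Normalize a raw syslog line into facility, severity and message.

    Anything that merely *receives* the line can claim syslog support;
    only a parser like this makes the content usable by playbooks.
    """
    m = SYSLOG_RE.match(line)
    if not m:
        # No PRI header: keep the raw line, but nothing to automate on.
        return {"facility": None, "severity": None, "message": line}
    pri = int(m.group("pri"))
    return {
        "facility": pri // 8,  # per RFC 3164: PRI = facility * 8 + severity
        "severity": pri % 8,
        "message": m.group("msg"),
    }

print(parse_syslog("<34>Oct 11 22:14:15 host su: auth failure"))
# → {'facility': 4, 'severity': 2, 'message': 'Oct 11 22:14:15 host su: auth failure'}
```

Only the parsed form gives automation actions something concrete (a severity, a facility) to branch on.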

Reducing this to a purely quantitative marketing message also entirely misses the point. Organizations only really care about the technologies they have deployed or are planning to acquire. The quantity here is entirely misleading. More importantly, users are not stupid. They will see through this charade, at the latest, when conducting a proof of concept. And at that point any vendor following this approach has some uncomfortable explaining to do. No relationship, personal or business, gets off to a good start based on fudging the truth.

I have rarely seen an RFP that was based purely on a quantitative measure of supported 3rd party integrations, so it is baffling why marketers believe this to have any impact.

At DFLabs we have decided to go a different way. We want to clearly state which of our integrations are bidirectional as opposed to only based on data ingestion, and which integrations are certified, or compatibility tested by our integration partners.

While at first glance this appears to put us at a disadvantage, we trust in the intelligence of our customers. We hope that it will help them to make better-informed decisions, and that they will give us credit for being honest and realistic.

Meltdown and Spectre – What They Mean to the Enterprise

Since Meltdown and Spectre were publicly disclosed earlier this month, there has been much confusion surrounding exactly how these attacks work, and what impact they may have on the enterprise. Though these vulnerabilities could pose a serious risk to the enterprise, I think we can probably hold short of a full-fledged panic, at least for now. Let’s take a look at what these two attacks really are, what solutions are available, and what they may ultimately mean for the enterprise.

Meltdown and Spectre exploit critical flaws in modern processors. Unlike most vulnerabilities, the vulnerabilities used by Meltdown and Spectre do not exist in software, but in the processors themselves. This is significant for many reasons. First, while it is possible to patch “around” these vulnerabilities, it is much more difficult to patch the vulnerabilities themselves. Second, although endpoint protection solutions may be able to detect methods used to deliver a payload which exploits the Meltdown or Spectre vulnerabilities, the execution of the exploits themselves is very hard to distinguish from normal processes and thus very difficult to detect heuristically. Finally, since these exploits are performed at the hardware level, the forensic artifacts often relied upon to investigate malware incidents are absent in these attacks.

While programs are typically not permitted to read data from memory sections outside their own address space, a malicious program can exploit Meltdown or Spectre to access regions of memory that the process should not otherwise have access to. This could include access to user credentials or hashes, private keys, sandbox escape, VM escape and access to other confidential information stored in memory.

So far, there are three specific vulnerabilities which are used in Meltdown or Spectre attacks:

• Variant 1: Bounds check bypass (CVE-2017-5753)
• Variant 2: Branch target injection (CVE-2017-5715)
• Variant 3: Rogue data cache load (CVE-2017-5754)

Spectre (Variants 1 and 2) has been shown to impact all modern chips manufactured by Intel, AMD and ARM. Meltdown (Variant 3) has been shown to impact most Intel chips since 2010, although researchers noted that with additional research and code optimization, exploitation against AMD and ARM chips may be possible.

Meltdown

A processor operates in two modes: kernel mode and user mode. Kernel mode is where the operating system executes, and processes running in kernel mode have access to hardware and all regions of memory. User mode is where most user processes execute. Each user mode process has a restricted view of memory, allowing access only to the process’s own virtual address space, and must use system calls to access hardware. Under normal conditions, if a user mode process attempts to directly access a memory section outside its own virtual address space, an exception will be generated which will cause the process’s thread to terminate.

While kernel memory cannot be read directly from user space without utilizing a system call, most modern operating systems still map kernel addresses into userspace processes to optimize performance. Although this sounds like a vulnerability on its own, a userspace process attempting to access kernel space outside of a normal system call will generate an exception and cause the process’s thread to terminate. This means that although kernel space is mapped into the user process address space, it is effectively inaccessible to the user space process outside of normal system calls.

Meltdown exploits two facts: that kernel space memory is mapped into the user space process’s address space, and that, to optimize performance, most modern processors execute instructions out of order. Together, these allow the normal protection mechanisms to be bypassed so that unprivileged user space processes can access kernel space memory. By executing specially crafted out-of-order instructions, the contents of a kernel space memory address can be encoded into the processor’s cache, from which they can then be retrieved via an address reference. Although the unprivileged memory access ultimately causes an exception, which causes the rest of the out-of-order instructions to be discarded, the reference to the retrieved memory still remains in the processor’s cache.

Although Address Space Layout Randomization (ASLR) means that an attacker will not know the location of kernel space memory beforehand, this presents more of a stumbling block than a barrier for this attack. The researchers who discovered Meltdown found that the location of kernel memory can be determined in about 128 guesses on a system containing 8 GB of memory, making ASLR a trivial obstacle to this attack.

Spectre

Spectre exploits another set of processor features, branch prediction and speculative execution, to access protected memory locations. Most processes contain a vast array of branch conditions, such as If/Then/Else statements. To optimize performance, as a process executes, the processor keeps track of which branches are executed most often. This allows the processor to “think ahead”, reading memory sections and beginning to execute the branch it believes is most likely to be taken before the branch condition is actually resolved. This is referred to as speculative execution.

Spectre causes the processor to access a memory location of the attacker’s choosing within the user process’s space during branch prediction and speculative execution which would not be accessed during normal process execution. Although these memory locations are never actually executed by the processor, they now exist in the processor’s cache and can then be read by the attacker.
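The cache side channel that both attacks rely on can be sketched conceptually. The classic Spectre Variant 1 gadget, in C terms, is `if (x < array1_size) y = array2[array1[x] * 256];` — the processor, trained to expect the bounds check to pass, speculatively reads the secret byte and pulls a corresponding `array2` cache line in, and the attacker recovers the byte by timing accesses. The snippet below is an illustration of the timing-probe step only (Python itself is not exploitable this way, and `timing_probe` is a hypothetical helper, not a working exploit):

```python
def timing_probe(access_times, cache_hit_threshold=100):
    """Recover the speculatively touched index from per-line access times.

    After speculation, exactly one probe-array cache line is "hot".
    The fastest (cached) line reveals which entry speculation loaded,
    and therefore the value of the secret byte.
    """
    candidates = [i for i, t in enumerate(access_times)
                  if t < cache_hit_threshold]
    return candidates[0] if candidates else None

# Hypothetical cycle counts: line 83 is the only cache hit.
times = [300] * 256
times[83] = 40
print(timing_probe(times), "->", chr(timing_probe(times)))  # 83 -> 'S'
```

In a real attack the timing array is produced by repeatedly flushing the probe array, triggering the mistrained branch, and measuring each line’s reload latency.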

Patches and Workarounds

There are patches currently available for the Linux kernel to prevent the Meltdown attack. Known as KAISER or Kernel Page Table Isolation (KPTI), these patches prevent most kernel memory from being mapped into userspace processes (although some kernel memory, such as the IDT, must still be mapped to user mode address space). Microsoft has also developed a similar patch for Windows to address Meltdown; however, due to compatibility issues (BSODs) with some endpoint protection solutions, Microsoft has cautioned users to check with their endpoint protection provider prior to deploying the patch. An unofficial list of support for the Windows patch by common endpoint protection solutions is available here. It is important to note that these patches do not fix the actual vulnerability itself, they simply limit the practical exploitation of the vulnerability. Some users have also been reporting issues with the Microsoft patch, causing systems with older AMD processors to fail to boot correctly.
Legacy systems, such as Windows XP and Windows Server 2003, will continue to remain vulnerable to Meltdown and Spectre, along with the many other vulnerabilities still present on these systems. The best protection is to harden around these systems, or better yet, replace them.
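On Linux, recent kernels expose per-vulnerability mitigation status under sysfs, which makes patch verification scriptable. A small sketch (availability of the directory varies by kernel version, and the helper names are illustrative):

```python
from pathlib import Path

# Kernels that support it report mitigation status here, one file per
# vulnerability (e.g. "meltdown", "spectre_v1", "spectre_v2").
VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def mitigation_status(vuln_dir: Path = VULN_DIR) -> dict:
    """Map each reported vulnerability to the kernel's mitigation string."""
    if not vuln_dir.is_dir():
        return {}  # kernel too old, or not Linux
    return {f.name: f.read_text().strip() for f in vuln_dir.iterdir()}

def is_vulnerable(status: str) -> bool:
    """The kernel prefixes unmitigated entries with 'Vulnerable'."""
    return status.startswith("Vulnerable")

for name, status in mitigation_status().items():
    flag = "!!" if is_vulnerable(status) else "ok"
    print(f"[{flag}] {name}: {status}")
```

A fleet-wide sweep of this output is an easy first check that KPTI and related patches actually took effect after deployment.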

Patches to address these vulnerabilities do come with some performance implications, however. Estimates range from a 5% to a 30% decrease in performance depending on system usage. Systems utilized for more common tasks, such as email, Internet browsing, and word processing, are less likely to see a major performance impact from these patches. However, systems used for tasks which require more intensive hardware utilization are more likely to suffer a noticeable impact.

While KPTI and similar patches are also effective against Spectre Variant 2, protection from Spectre Variant 1 is largely dependent on patches from individual software vendors. Software such as web browsers, or sandboxes and virtualization technologies are likely the most attractive target for Spectre attacks since it would allow attackers to access information from other browser tabs or escape the sandbox or VM. The major web browsers either already have patches available, or are planning to deploy patches in the upcoming weeks. Users should ensure that they are running the most up-to-date versions of their web browsers. Sandbox and virtualization users, such as those using JavaScript, Xen or Docker, should check with their respective vendors to see when patches will be available and ensure that they are running the most current versions of the software.

Apple, which bases its iOS A-series processors on ARM’s instruction set, has already deployed patches for some devices and plans to release patches for other devices soon. Apple has advised users to check for new updates and to ensure that they are running the most current operating system for their devices.

Google says that Android devices, which also most commonly run on ARM processors, should be protected if they have the latest security update installed. Currently, Android devices are protected by restricting access to the high-precision timers needed to determine if a cache hit or miss has occurred; however, future Android security updates will also include additional mitigations based on KPTI. Users of Google Android devices, such as the Nexus and Pixel, should have immediate access to the security updates. Users of other Android devices will likely have to wait a little longer for these patches as they are tested and pushed through each manufacturer’s update process.

Chromebook users running the latest version of Chrome OS (version 63 as of now) should already be protected against these vulnerabilities as well. To check if a Chromebook is updated to version 63, or if an update is available, users should check Google’s list of Chrome OS devices.

Qualcomm processors are affected by these vulnerabilities as well, although patches have not yet been released for all systems running on Qualcomm processors. Users are advised to check for operating system updates, particularly when running Android and Linux, on their Qualcomm-powered devices. IBM firmware updates should be available in the upcoming weeks for their Power CPUs to address Spectre-like issues in its design.

Cisco is also preparing to release patches to prevent exploitation of these vulnerabilities. Since most Cisco products are closed systems, which do not allow customers to run custom code on the device, practical exploitation of these vulnerabilities on Cisco devices is limited. However, certain processor and OS combinations in some Cisco products could leave them vulnerable, and it is recommended that users patch their Cisco devices as soon as a patch is available.

What does this mean for me?

While Meltdown and Spectre do pose some serious potential security risks, at the moment there is no need to panic. Users are still far more likely to be exploited via a phishing email than either of these vulnerabilities. So far, there has been no evidence that either of these attacks has been used in the wild, although it is certainly possible that others have known about these vulnerabilities for some time. It will likely take some additional time and effort to advance these attacks from a proof of concept to a reliable attack vector.

There has been enough attention given to these vulnerabilities that most major operating systems and applications already have patches available or will have patches available soon. Patching will be the best defense against these attacks. With proper patching, the risks from these vulnerabilities will be significantly reduced or eliminated. Users of less common or no longer maintained software which may be vulnerable to Spectre Variant 1 should check with their software vendors or deploy additional protections around these systems. Legacy and embedded devices, with limited or no ability to patch, will likely see the greatest long-term risk associated with these attacks.

The greatest potential impact from Meltdown could have been on cloud service providers, as they could have allowed both access to the hypervisor as well as data from other instances. However, most of the major cloud service providers were notified of the vulnerability prior to public disclosure and should have patches already deployed, or being deployed soon. Cloud service users should check with their individual providers to determine their potential existing exposure from these vulnerabilities.

The most significant lasting impact from Meltdown and Spectre may be the recognition that these types of hardware vulnerabilities exist in the first place. Although these vulnerabilities have been present in hardware for many years, little attention has been paid to this class of vulnerabilities. Hardware vulnerabilities such as these present a very attractive vector for attackers due to the high level of access they provide and the potential for VM escape; even more so when they can be exploited remotely. We have seen this type of vulnerability “gold rush” in the past, where the discovery of a single vulnerability leads to an increase in scrutiny of a certain application or system, uncovering dozens of additional vulnerabilities. Given the challenges in detecting, investigating and patching these types of hardware vulnerabilities, a gold rush here could have serious security implications.

R3 Rapid Response Runbook for Spear Phishing

According to Verizon’s 2017 Data Breach Investigations Report, social engineering was a factor in 43% of breaches, with phishing accounting for 93% of social attacks.

DFLabs has worked closely with our customers to draft and deploy phishing-specific runbooks. In this article, we take a look at an example R3 Phishing runbook.

The Premise
Our premise is that an incident that appears to be a spear phishing attempt has been forwarded to the SOC. The SOC team must qualify the incident and determine what needs to be done to mitigate the attack.

We begin our investigation with an incident observable, a fully qualified domain name (FQDN).

We will correlate the FQDN with several external threat intelligence services to assess whether this is truly an ongoing phishing attempt or a benign false positive. We have used VirusTotal and Cisco Umbrella in this example, but other threat intelligence and malware services could be used instead.

We have three different potential outcomes and associated decision paths:


The R3 Runbook

1. The FQDN is automatically extracted from the incident alert and then sent to Cisco Umbrella Investigate for classification.


2. Depending on the outcome – whether Cisco Umbrella Investigate classifies the FQDN as benign or malicious – we can take one of two different paths.


3. The FQDN is rechecked with VirusTotal to verify the result. We do this whether the first classification was malicious or benign: at this point we do not know whether one of the two services is returning a false positive or a false negative, so we perform a double check.


4. If both external third-party queries confirm that the FQDN is malicious, we have a high degree of certainty that this is a harmful phishing attempt and can step through automatically to containment. In our example, we automatically block the domain on a web gateway.


5. Alternatively, if only one of the two queries returns a malicious classification, we need to hand the runbook off to a security analyst to conduct a manual investigation. At this point, we cannot determine in an automated manner where the misclassification resides. It could be that one of the services has stale data, or doesn’t include the FQDN in its database. With the ambiguous result, we lack the degree of confidence in the detection to trust executing fully automated containment.


6. If both VirusTotal and Cisco Umbrella Investigate return a non-malicious classification, no further action will be necessary at this point. We will notify the relevant users that the incident has been resolved as a false positive and can close the case for now.


This R3 phishing runbook demonstrates the flexibility and efficiency of automating incident response. Incident qualification is automated as much as is feasible, but a human is kept in the loop when cognitive skills are required. Containment is automated only when the degree of confidence is sufficient, and false positives are eliminated without requiring human intervention.
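The runbook's decision logic can be sketched in a few lines. This is a minimal illustration, not DFLabs' actual implementation: the two lookup callables stand in for real VirusTotal and Cisco Umbrella Investigate API queries, and the returned action names are invented for the example.

```python
# Sketch of the R3 phishing runbook's qualification logic.
# The lookup callables are placeholders for real VirusTotal and
# Cisco Umbrella Investigate API calls (names are illustrative).

def qualify_fqdn(fqdn, umbrella_lookup, virustotal_lookup):
    """Return the containment decision for a suspected phishing FQDN.

    Each lookup callable returns True if its service classifies the
    FQDN as malicious, False otherwise.
    """
    umbrella_malicious = umbrella_lookup(fqdn)    # steps 1-2
    vt_malicious = virustotal_lookup(fqdn)        # step 3: double check

    if umbrella_malicious and vt_malicious:
        return "auto_contain"          # step 4: block domain on web gateway
    if umbrella_malicious or vt_malicious:
        return "manual_review"         # step 5: hand off to an analyst
    return "close_false_positive"      # step 6: notify users, close case


# Example with stubbed threat intelligence responses:
decision = qualify_fqdn(
    "phish.example.com",
    umbrella_lookup=lambda fqdn: True,
    virustotal_lookup=lambda fqdn: False,
)
print(decision)  # ambiguous result -> manual_review
```

The key design choice, mirroring the runbook, is that automation only proceeds to containment when both services agree; any disagreement downgrades to a human decision.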

3 Ways to Create Cyber Incidents in DFLabs IncMan

At the heart of incident response, and by extension of Security Automation and Orchestration technologies, resides the Cyber Incident. A typical definition of a cyber security incident is “Any malicious act or suspicious event that compromises or attempts to compromise, or disrupts or tries to disrupt, a critical cyber asset”. Almost everything we do in a SOC or a CSIRT is based on incidents, and there are a variety of potential incident sources, for example:

  1. Alerts from cyber security detection technologies such as Endpoint Detection & Response or User Entity Behavior Analytics tools
  2. Alerts from Security Information & Event Management Systems (SIEM)
  3. Emails from ITSM or case management systems
  4. Website submissions from internal stakeholders and whistle-blowers
  5. Phone calls from internal users and external 3rd parties

This diversity of incident sources means that a solid SAO solution must offer a variety of different methods to create incidents. Regulatory frameworks also frequently mandate being able to originate incidents from different sources. DFLabs IncMan offers a rich set of incident creation options.

There are three primary ways to create incidents in IncMan, offering flexibility to accommodate a variety of incident response process requirements and approaches.

Option 1: Automated Incident Creation

We will feature automated incident creation in more detail in a future post. In the meantime, I will show you where to find this feature.

Select the Settings menu, then head to the External Sources section:

 


You will see that under the External Sources option there are three options available to use as sources for automated incident creation:

  1. Incoming events automation, for CEF/Syslog
  2. Incoming Mail automation, for a monitored email account
  3. Integrations, for all QIC integration components.

Automated incident creation supports a variety of filters to enable a rules-based approach. In addition, it is also possible to create incidents using our SOAP API; certified 3rd party applications, such as Splunk, use this mechanism to create incidents within IncMan.
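As an illustration of the rules-based approach for incoming CEF/syslog events, the sketch below parses a CEF header and applies a simple severity rule to decide whether an event should become an incident. The field names, threshold, and rule are assumptions for the example, not IncMan's actual filter syntax.

```python
# Illustrative sketch of a rules-based filter for incoming CEF events.
# The rule and threshold below are assumptions, not IncMan's actual
# configuration syntax.

def parse_cef(line):
    """Parse the pipe-delimited CEF header into a dict (extensions omitted)."""
    header = line.split("|")
    return {
        "vendor": header[1],
        "product": header[2],
        "signature": header[4],
        "name": header[5],
        "severity": int(header[6]),
    }

def should_create_incident(event, min_severity=7):
    """Example rule: only high-severity events become incidents."""
    return event["severity"] >= min_severity

raw = "CEF:0|Acme|IDS|1.0|2001|Spear phishing detected|9|src=10.0.0.5"
event = parse_cef(raw)
print(should_create_incident(event))  # True
```

In practice, rules like this keep low-value alerts from flooding the incident queue while still capturing everything above the chosen severity line.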

Option 2: Manual Incident Creation

Click the Incidents menu option, then click the + symbol on the incidents screen:

 


Fill out all mandatory fields (these can be defined in the custom fields screen) then step through and complete the incident wizard to create the incident:

 


Once all relevant fields have been completed, click save; the incident will then appear in the incident view as part of the queue you assigned in the details screen.

Option 3: Incident Creation from Source

Select an incident source for the incident you want to create, for example, a Syslog or CEF message, an email, or a threat intelligence source (STIX/TAXII, ThreatConnect):

 


In this screen, you can then convert this source item to an incident, or link the source to an existing incident.

Finding a Balance Between Rapid and Measured Incident Response

Since I am a new face (or perhaps just a name to most of you) here at DFLabs, I wanted to take a moment to introduce myself before we jump into the topic for today. My name is John Moran and I recently joined the DFLabs team as Senior Product Manager. Prior to joining the DFLabs team, I worked in a variety of roles, including incident response consulting, security operations and law enforcement. While I have many responsibilities at DFLabs, one of my primary roles, and the one that I am perhaps most passionate about, is ensuring that DFLabs continues to bring you the industry-leading security orchestration, automation and response features that you have come to expect from IncMan. If you have feature requests, suggestions or other comments, good or bad, regarding IncMan, I’d love to hear from you. Please reach out to me at [email protected]. With that out of the way, let’s get to the good stuff…

While reports such as the Verizon DBIR indicate that the increased focus on creating holistic, detect-and-respond security programs has had a positive impact on reducing the time to detect security incidents, these same reports have also shown that attackers are continuing to evolve, and there is still a gap between compromise and detection. What I would like to discuss here, though, might be described as the opposite problem: overreaction to a perceived security incident, or conducting a full-scale response to a security incident prior to validating that a security incident has indeed occurred.

Please do not misunderstand what I am saying, I will always advocate the “treat it as an incident until you know otherwise” approach to incident response. However, I would also encourage that the response to any security incident should always be a measured response. The incident response process must be rapid and decisive; but just as under-responding to an incident can present serious financial and reputational risks to an organization, so too can over-responding to a potential security incident. As with any other business process, incident response must provide value to an organization. Continued over-response to perceived security incidents will reduce the overall value that incident response provides to an organization, and over time will result in decreased support from management.

Few studies have truly been able to quantify the costs associated with failing to conduct a measured response. A 2015 study by the Ponemon Institute suggests that responding to incidents detected based on erroneous or inaccurate malware alerts costs large organizations up to 395 hours per week, or almost $1.3 million a year. It is important to note that this study only took into consideration time spent investigating malware alerts. While malware detection technologies have undoubtedly improved in the two years since this study was conducted, most organizations have a variety of detection technologies, all generating alerts which must be investigated. Ponemon assumed that the organizations surveyed were conducting an appropriate, measured response to each of these false positives. With the cost already so high, it is easy to see how costly over-responding to incidents can become at scale.
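The Ponemon figures are easy to sanity-check. The hourly rate below is an assumption chosen to illustrate the arithmetic; it is not taken from the study.

```python
# Rough sanity check of the Ponemon figures: 395 hours per week of
# false-positive triage, at an assumed fully loaded analyst cost.
hours_per_week = 395
weeks_per_year = 52
assumed_hourly_cost = 63  # USD; illustrative assumption, not from the study

annual_hours = hours_per_week * weeks_per_year      # 20,540 hours/year
annual_cost = annual_hours * assumed_hourly_cost    # ~$1.29 million/year
print(f"{annual_hours} hours/year, ~${annual_cost:,}")
```

At roughly $63 per fully loaded analyst hour, 395 hours a week works out to just under the $1.3 million annual figure the study reports.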

While conducting incident response consulting, I have personally seen organizations spend weeks to months conducting full-scale incident response activities before spending tens of thousands of dollars for incident response consulting, only to find out that the perceived incident was based on faulty information or conclusions. So how do you minimize the risk of over-responding while continuing to ensure that each potential incident is properly investigated? Here are five tips based on my experience:

  1.  Have the right people in place – There is simply no substitute for having the right people in place. While proper training and experience are vital, the qualities of an effective analyst extend beyond these two attributes. It is crucial to have analysts who possess an analytical mindset and can remain level-headed amidst a stressful and dynamic environment. Training can be provided and experience can be gained; however, some of these less tangible qualities are much harder to learn.
  2.  Have the right toolsets in place – Attempting to substitute tools for skills will inevitably lead to failure. However, it is important to have the proper tools in place to give those highly skilled analysts the information they need to make fact-based conclusions. Even the most highly skilled analysts will inevitably arrive at the wrong conclusion when presented with incomplete or inaccurate information.
  3.  Know the threat landscape – Threat intelligence, and I mean actual intelligence, not just a machine-readable threat feed, can provide much greater context surrounding a potential security incident. Analysts must also be provided the opportunity to remain up-to-date on the ever-changing threat landscape. This can allow decision makers a much more accurate perspective on which to base their initial level of response. Often, it is a lack of knowledge and conclusions based on assumptions that lead to a dramatic over-response.
  4.  Know your limitations – Unless you are fortunate enough to work for a government agency or one of the world’s largest organizations, chances are that at some point your needs may exceed the scope of your internal capabilities. These limitations are not weaknesses in and of themselves; the risk presents itself when an organization fails to realize its limitations and attempts to work outside of those bounds. It is important to know when to consider tapping into external resources such as consulting, incident response retainers and managed services.
  5.  Replace the emotional response with processes and procedures – Even the most highly skilled analysts will approach some potential security incidents with certain biases or preconceived notions. It is essential to implement quality processes and procedures which maximize the analyst’s skills, take full advantage of the available tools, and guide the incident response process. Processes and procedures surrounding incident validation, incident classification and initial resource allocation can ensure that the process stays on track and avoid straying down the wrong, costly road.

The most important goal of any security program must always remain to never under-respond to an incident. However, integrating these five tips into your security program will undoubtedly provide a better, more efficient process to determine what the appropriate level of response to each potential security incident should be, greatly reducing the risk of over-responding.

Using IncMan Dashboards and Widgets

Today, we will talk about dashboards in IncMan: how to add, delete and generally organize the dashboard widgets. IncMan widgets can display charts, graphs and tables to display and track Key Performance Indicators. IncMan also supports role-based dashboards, a key requirement for any SOC, ensuring that the right information is available to the right person based on their role, duties, and needs. Which information is required by an individual or team will differ from organization to organization, so we support customization to create unique and dedicated dashboards for every persona.

How to use IncMan Dashboards and Widgets


This default screen displays a number of out-of-the-box charts to get you started, but you will want to customize the dashboard with the widgets you need for your role.

1. To begin creating your unique dashboard, select “Customize” to open the menu.

 


2. The dashboard screen is split into 4 distinct parts: top, left, right and bottom. By selecting the “+” symbol, you can add an additional widget from a number of pre-defined templates. For this example, let’s add the “Incident Overview” widget:

 


3. You can change the name of the widget in the configuration screen, for example, “GDPR” or “Urgent Incidents”. You can also specify the applicable timeframe for the widget, and the refresh rate, to determine how often the widget will be updated.

4. Next, we will configure the widget filters to determine the data that the widget displays.

 


We can apply search filters to narrow down the displayed incidents. You can filter by a variety of attributes, including tags, incident priority, the Incident Response process stage, and any custom fields you have defined. Every filter that is selected will also need a corresponding value assigned to it in the values tab.

 


5. Once you’ve selected the values you want to add into the table, the final step allows you to define which columns will be displayed in the widget.
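The filter-plus-columns pattern from steps 4 and 5 can be sketched as a small function. The field names and filter values below are illustrative only, not IncMan's internal schema.

```python
# Sketch of how a widget's filters and column selection might be applied
# to a set of incidents (field and filter names are illustrative only,
# not IncMan's internal schema).

def apply_widget(incidents, filters, columns):
    """Keep incidents matching every filter, then project the chosen columns."""
    rows = []
    for incident in incidents:
        if all(incident.get(field) == value for field, value in filters.items()):
            rows.append({col: incident.get(col) for col in columns})
    return rows

incidents = [
    {"id": 1, "priority": "high", "tag": "GDPR", "stage": "containment"},
    {"id": 2, "priority": "low", "tag": "GDPR", "stage": "closed"},
    {"id": 3, "priority": "high", "tag": "malware", "stage": "triage"},
]

# A hypothetical "Urgent Incidents" widget: high-priority only,
# displaying the id and process-stage columns.
rows = apply_widget(incidents, {"priority": "high"}, ["id", "stage"])
print(rows)  # [{'id': 1, 'stage': 'containment'}, {'id': 3, 'stage': 'triage'}]
```

Each filter field is paired with a value, just as the Values tab pairs a value with every selected filter in the widget configuration.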

 


Enjoy!

GDPR & Breach Notification – Finally We Will Get Some European Breach Data

The EU GDPR will be enforced from May 25th next year. GDPR mandates a wide variety of requirements on how data processors must manage customer and 3rd party data. Although it is not primarily focused on cybersecurity, it does contain some admittedly vague requirements on security monitoring: data processors must establish a breach notification procedure, including incident identification systems, and must be able to demonstrate that they have established an incident response plan.

GDPR and Data Breach Notification

Further, there is a requirement to notify the supervisory authority of a data breach within 72 hours of becoming aware of it, or face a stiff financial penalty. This last requirement is of special interest beyond its impact on data processors, because it means that for the first time we will begin having reliable data on European breaches.
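The 72-hour clock starts at the moment of awareness, which makes the notification deadline a simple calculation. A minimal sketch (the function name is illustrative):

```python
# Illustrative calculation of the GDPR 72-hour notification deadline,
# counted from the moment the organization becomes aware of the breach.
from datetime import datetime, timedelta, timezone

def notification_deadline(awareness_time):
    """The supervisory authority must be notified within 72 hours."""
    return awareness_time + timedelta(hours=72)

aware = datetime(2018, 5, 28, 9, 0, tzinfo=timezone.utc)
deadline = notification_deadline(aware)
print(deadline.isoformat())  # 2018-05-31T09:00:00+00:00
```

Seventy-two hours is an unforgiving window over a weekend or holiday, which is why the breach notification process needs to be defined, and ideally automated, well before it is ever triggered.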

Historically, European companies have had no external requirement to be transparent about being affected by a breach. This has had the consequence that we have not had good data or an awareness of how well or badly European organizations are doing when it comes to preventing or responding to security breaches.

I am sure that if, like myself, you have worked in forensics and incident response in Europe over the years, you are aware of far more breaches than are publicly disclosed. The only information available is when a breach is disclosed by the press or law enforcement, or when the impact is so great that it cannot be ignored. We also have some anonymized reports from some vendors and MSSPs, but these are really no more than samples. While not without benefit, these do not provide a reliable indicator, as the samples are not necessarily statistically representative. This provides a false sense of how European organizations are faring compared to other regions and presents a skewed image of European security in general.

The true state of European security is unknown and has been difficult to quantify. I have seen German articles, for example, claiming that German security is better than the rest of the world’s because there are fewer known breaches. The absence of evidence is of course not evidence of absence: something that has not been quantified cannot be said to be good or bad. More importantly, if you do not measure something, it cannot be improved.

It will be interesting to see whether GDPR will force European organizations to place more focus on Incident Detection and Response, and give us insight into the true state of European security.

Learn everything you need to know about GDPR breach notification requirements and how to prepare your organization for them in our new whitepaper: Incident Response with the new General Data Protection Regulation. Download your free copy here