What is Threat Intelligence?
Threat Intelligence has morphed from a catchy marketing buzzword to a highly sought-after tool, which when used correctly, can bring immense value to an organization. However, because it is in high demand and organizations are researching and adopting it in some form or another, the market has become flooded with products and services promising to provide “Threat Intelligence” to an organization. Unfortunately, in many cases, the “Threat Intelligence” provided is only one piece of a larger puzzle.
When working with Threat Intelligence it is easier to look at it as two separate concepts:
- Threat Data (aka Threat Feeds)
- Threat Context (aka Intelligence)
These concepts combined produce the relevant and actionable “Intelligence” organizations need to better align their security goals with their business’s long-term objectives.
Threat Data consists of raw data feeds containing artifacts such as malicious IPs or URLs, which generally lack context about the motives or behavior behind them. Threat data alone cannot provide the intelligence necessary to make informed decisions regarding the security of our environments, but when paired with Threat Context it gives a clearer picture of the risk to our organization.
Threat Context is more elusive and is usually where organizations fall short when implementing a Threat Intelligence program. To apply context, an organization must have a clear goal for what it is trying to achieve by introducing a piece of threat data into its security program. Without a clear vision, threat intelligence can become an expensive drain on resources with little to no real value.
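One way to picture the data-versus-context distinction is as a structure in which a raw artifact only becomes actionable once contextual facts are attached. The fields, values, and actionability rule below are purely illustrative assumptions, not a reference to any particular product:

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    """A raw threat-data artifact, e.g. an IP address from a feed."""
    value: str
    ioc_type: str  # "ip", "url", "hash", ...
    context: dict = field(default_factory=dict)

def add_context(ioc: Indicator, **facts) -> Indicator:
    """Attach contextual facts (source, relevance, behavior) to raw data."""
    ioc.context.update(facts)
    return ioc

def is_actionable(ioc: Indicator) -> bool:
    """Raw data alone is not intelligence; require context before acting."""
    return bool(ioc.context.get("relevance")) and bool(ioc.context.get("source"))

feed_entry = Indicator("203.0.113.7", "ip")
assert not is_actionable(feed_entry)  # threat data without context

add_context(feed_entry, source="paid feed", relevance="targets our sector",
            behavior="credential phishing")
assert is_actionable(feed_entry)  # data plus context yields intelligence
```

The point of the sketch is simply that the same artifact moves from noise to intelligence only when contextual fields are populated.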
Threat Intelligence Challenges
As more organizations begin to adopt threat intelligence practices into their security programs the need for a more structured implementation path has become greater. Threat intelligence implementation is a marathon process which needs to be carefully planned and executed to ensure it is agile and built on a strong foundation.
Understanding some common challenges organizations have faced while building their threat intelligence program can provide valuable information to those organizations looking to adopt threat intelligence into their security monitoring program.
Does Not Align with Business Goals
One of the biggest mistakes made when implementing a threat intelligence program is the failure to ensure its use is tied to an identified business risk. When evaluating threat intelligence feeds, security teams will want to identify the business problem the feeds will help solve and examine how these data sources will be used in conjunction with their internal threat intelligence feeds.
Performing a risk assessment can help identify the risks an organization may face and what can be done to minimize their impact on the business. This practice will arm an organization with valuable information on how best to protect the business and what types of intelligence will make the most impact for the organization.
Choosing the Wrong Intelligence Data
Over the past couple of years, threat intelligence data or feeds have become synonymous with a threat intelligence program. This data is a crucial part of an intelligence program, but without context, an organization runs the risk of adding yet another data source without fully recognizing its value. When evaluating threat intelligence data, consider the following:
- What is the focus?
A majority of threat intelligence feeds focus on a single area of interest such as malicious domains, IP addresses, or hash values. Knowing how these feed types will be utilized within your organization will determine their overall value.
- Where is the information gathered from?
There is an endless number of free and paid threat intelligence subscription services available to take advantage of, but not all data sources are created equal. There are several main types of intelligence data sources to be aware of when evaluating a threat feed, including:
- Open source
- Malware processing
- Human intelligence
- Internal telemetry
Organizations will want to have a good understanding of where these feeds are derived from and ensure, especially if they are delivered via a paid service, that they can be evaluated against their internal intelligence to recognize their maximum potential.
- How frequently are they updated?
Ensuring threat intelligence feeds are updated and relayed in near real-time is an invaluable feature of any reputable data source. Ingesting stale or incomplete data can cause an organization to focus on the wrong objectives, which can lead to data overload and alert fatigue.
Asking these questions when evaluating a new threat feed will help identify what sources of intelligence may be the best fit for your business need, but the real value will be displayed through its analysis. Performing proper analysis of a threat intelligence feed is what will provide the context necessary for an organization to make operational changes to better secure their environment. Without analysis, these feeds become another potentially costly, unmanageable source of noise.
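The evaluation questions above can also be applied programmatically during ingestion. The sketch below triages a feed entry by freshness and by whether it corroborates internal telemetry; the 24-hour threshold, field names, and triage labels are assumptions chosen for illustration:

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(hours=24)  # freshness threshold (assumption)

def evaluate_entry(entry: dict, internal_iocs: set, now=None) -> str:
    """Triage a feed entry: drop stale data, prioritize internal matches."""
    now = now or datetime.now(timezone.utc)
    if now - entry["last_seen"] > MAX_AGE:
        return "discard-stale"      # stale data breeds alert fatigue
    if entry["value"] in internal_iocs:
        return "alert"              # corroborated by our own telemetry
    return "watchlist"              # fresh, but no internal match yet

now = datetime(2024, 1, 2, tzinfo=timezone.utc)
internal = {"198.51.100.23"}
fresh_match = {"value": "198.51.100.23", "last_seen": now - timedelta(hours=1)}
stale = {"value": "203.0.113.9", "last_seen": now - timedelta(days=3)}
assert evaluate_entry(fresh_match, internal, now) == "alert"
assert evaluate_entry(stale, internal, now) == "discard-stale"
```

Even a crude filter like this shows why the "where is it from" and "how fresh is it" questions matter: they determine whether an entry ever reaches an analyst.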
Failure to Operationalize Intelligence Data
The ability to utilize threat intelligence data in an operational capacity is the ultimate goal of a threat intelligence program. A successful program will present an organization with greater insight into the potential threats their environment faces and provide its security team a way to prioritize their alerting based on the risk it poses to business. Failure to align an organization’s security program with their business objectives can have a direct impact on the intelligence sources they utilize and how they are able to operationalize their intelligence.
Overcoming these challenges while implementing a threat intelligence program can be tricky. It is an ongoing, and at times tedious, process which, if implemented correctly, will adapt as your business grows. If you do find yourself up against any of these challenges, take a step back and make sure that the intelligence source fits a business objective, is sourced appropriately for its use case, and can be used to make operational changes. If you can answer yes to all of these criteria, you are on your way to achieving a higher level of cyber threat intelligence.
We’ve been witnessing the continual transformation of the cyber security ecosystem in the past few years. With cyber attacks becoming ever-more sophisticated, organizations have been forced to spend huge amounts of their budgets on improving their security programs in an attempt to protect their infrastructure, corporate assets, and their brand reputation from potential hackers.
Recent research, however, still shows that a large number of organizations are experiencing an alarming shortage of the cyber security skills and tools required to adequately detect and prevent the variety of attacks being faced by organizations. Protecting your organization today is a never-ending and complex process. I am sure, like me, you are regularly reading many cyber security articles and statistics detailing these alarming figures, which are becoming more of a daily reality.
Many organizations are now focusing the majority of their efforts on implementing comprehensive incident response plans, processes and workflows to respond to potential incidents in the quickest and most efficient ways possible. But even with this new approach, many experts and organizations alike express concerns that we will still face a shortage of skilled labor able to deal with these security incidents, with security teams struggling to fight back thousands of potential threats generated from incoming security alerts on a daily basis.
With so many mundane and repetitive tasks to complete, there's little time for new strategies, planning, training, and knowledge transfer. To make things worse, security teams are spending far too much of their valuable time reacting to the increasing number of false positives, threats that aren't real. This means spending hours, even days, analyzing and investigating false positives, leaving little time for the team to focus on mitigating real, legitimate cyber threats that could result in a serious and potentially damaging security incident. Essentially, we need to enable security operations teams to work smarter, not harder; but is this easier said than done?
How does security orchestration and automation help security teams?
With this in mind, organizations need to find new ways to combat these issues while adding value to their existing security program and the tools and technologies already in use, to improve their overall security operations performance. The answer lies in the use of Security Orchestration, Automation and Response (SOAR) technology.
Security Orchestration, Automation, and Response (SOAR) solutions focus on the following core functions of security operations and incident response, and help security operations centers (SOCs), computer security incident response teams (CSIRTs) and managed security service providers (MSSPs) work smarter and act faster:
- Orchestration – Enables security operations to connect and coordinate complex workflows, tools and technologies, with flexible SOAR solutions supporting a vast number of integrations and APIs.
- Automation – Speeds up the entire workflow by executing actions across infrastructures in seconds, instead of hours if tasks are performed manually.
- Collaboration – Promotes more efficient communication and knowledge transfer across security teams.
- Incident Management – Activities and information from a single incident are managed within a single, comprehensive platform, allowing tactical and strategic decision makers alike complete oversight of the incident management process.
- Dashboards and Reporting – Combines core information to provide a holistic view of the organization’s security infrastructure, while also providing detailed information on any incident, event or case when required by different levels of stakeholders.
Now let’s focus on the details of these core functions and see how they improve the overall performance.
Security Orchestration is the capacity to coordinate, formalize, and automate responsive actions based on measured risk posture and the state of the environment. More precisely, it is the way disparate security systems are connected together to deliver greater visibility and enable automated responses; it also coordinates volumes of alert data into workflows.
With automation, multiple tasks on partial or full elements of the security process can be executed without the need for human intervention. Security operations can create sophisticated processes with automation, which can improve accuracy. While the concepts behind security orchestration and automation are related, their aims are quite different. Automation aims to reduce the time processes take, making them more effective and efficient by automating repeatable processes and tasks. Some SOAR solutions also apply machine learning to recommend actions based on the responses to previous incidents. Automation also aims to reduce the number of mundane actions that must be completed manually by security analysts, allowing them to focus on higher-level, more important actions that require human intervention.
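As a simple illustration of the idea, an automated triage pass might close routine, well-understood alerts without analyst involvement and escalate only those needing human judgment. The alert fields and routing rule here are hypothetical, chosen only to show the pattern:

```python
def auto_triage(alerts):
    """Handle routine alerts automatically; escalate only what needs a human."""
    handled, escalated = [], []
    for alert in alerts:
        if alert["severity"] == "low" and alert["known_pattern"]:
            handled.append(alert["id"])     # closed with no analyst time spent
        else:
            escalated.append(alert["id"])   # requires human judgment
    return handled, escalated

alerts = [
    {"id": 1, "severity": "low", "known_pattern": True},
    {"id": 2, "severity": "high", "known_pattern": False},
    {"id": 3, "severity": "low", "known_pattern": True},
]
handled, escalated = auto_triage(alerts)
assert handled == [1, 3] and escalated == [2]
```

Even this toy filter captures the division of labor: the machine absorbs the repeatable work, and the analyst sees only the alerts that genuinely need intervention.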
Incident Management and Collaboration
Incident management and collaboration consist of the following activities:
- Alert processing and triage
- Journaling and evidentiary support
- Analytics and incident investigation
- Threat intelligence management
- Case and event management, and workflow
Security orchestration and automation tools are designed to facilitate all of these processes, while at the same time making the process of threat identification, investigation and management significantly easier for the entire security operations team.
Dashboards and Reporting
SOAR tools generate reports and dashboards for a range of stakeholders, from day-to-day analysts and SOC managers to other organizational departments and even C-level executives. These dashboards and reports are not only used to provide security intelligence, but can also be used to develop analyst skills.
Human Factor Still Paramount
Security orchestration and automation solutions create a more focused and streamlined approach and methodology for detection and response to cyber threats by integrating the company’s security capacity and resources with existing experts and processes in order to automate manual tasks, orchestrate processes and workflows, and create an overall faster and more effective incident response.
Whichever security orchestration and automation solution a company chooses, it is important to remember that no one single miracle solution guarantees full protection. Human skills remain the core of every future security undertaking and the use of security orchestration and automation should not be viewed as a total replacement of a security team. Rather, it should be considered a supplement that enables the security team by easing the workload, alleviating the repetitive, time-consuming tasks, formalizing processes and workflows, while supporting and empowering the existing security team to turn into proactive threat hunters as opposed to reactive incident investigators.
Humans and machines combined can work wonders for the overall performance of an organization’s security program and in the long run allows the experts in the team to customize and tailor their actions to suit the specific business needs of the company.
Finally, by investing in a SOAR solution for threat detection and incident response, organizations can increase their capacity to detect, respond to and remediate all security incidents and alerts they are faced with in the quickest possible time frames.
Security teams are inundated with a constant barrage of alerts. Depending on the severity of each alert, it is often minutes to hours before an analyst can properly triage and investigate it. The manual triage and investigation process adds additional time, as analysts must determine the validity of the alert and gather additional information. While these manual processes are occurring, the potential attacker has been hard at work, likely using scripted or automated processes to probe the network, pivot to other hosts and potentially begin exfiltrating data. By the time the security team has verified the threat and begun blocking the attacker, the damage is often already done.
So, how can security operations temporarily contain a possible threat and/or permanently block a known threat? This blog will explain how, using DFLabs’ IncMan SOAR technology and its integration with McAfee Web Gateway, and includes a use case example in action.
DFLabs and McAfee Web Gateway Integration
McAfee Web Gateway delivers comprehensive security for all aspects of web traffic in one high-performance appliance software architecture. For user-initiated web requests, McAfee Web Gateway first enforces an organization’s internet use policy. For all allowed traffic, it then uses local and global techniques to analyze the nature and intent of all content and active code, providing immediate protection. McAfee Web Gateway can examine the secure sockets layer (SSL) traffic to provide in-depth protection against malicious code or control applications.
Attackers are scripting and automating their attacks, meaning that additional infections and data exfiltration can occur in mere seconds. Security teams must find new ways to keep pace with attackers in order to minimize the impact from even a moderately skilled threat. Utilizing DFLabs IncMan’s integration with McAfee Web Gateway, IncMan’s R3 Rapid Response Runbooks automate and orchestrate the response to newly detected threats on the network, enabling organizations to immediately take containment actions on verified malicious IPs and ports, as well as temporarily preventing additional damage while further investigation is performed on suspicious IP addresses and ports.
Use Case in Action
McAfee Web Gateway has generated an alert based on potentially malicious traffic originating from a host inside the network to an unknown host on the Internet. Based on a predefined Incident Template, IncMan has automatically generated an Incident and notified the Security Operations Team. As part of the Incident Template, the following R3 Runbook has been automatically added to the Incident and executed.
Data exfiltration can occur in mere seconds. By the time a security team has validated the threat and blocked the malicious traffic, it is often too late. DFLabs integration with McAfee Web Gateway allows organizations to automatically contain the threat and stop the bleeding until further action can be taken.
The Runbook begins by performing several basic Enrichment actions, such as gathering WHOIS and reverse DNS information on the destination IP address. Following these basic Enrichment actions, the Runbook continues by querying two separate threat reputation services for the destination IP address. If either threat reputation service returns threat data above a certain user-defined threshold the Runbook will continue along a path which takes additional action. Otherwise, the Runbook will record all previously gathered data, then end.
If either threat reputation service has deemed the destination IP address to be potentially malicious, the Runbook will continue by using an additional Enrichment action to query the organization’s IT asset inventory. Although this information will not be utilized by the automated Runbook, it will play an important role in the process shortly.
Next, the Runbook will query a database of known-good hosts for the destination IP address. In this use case, it is assumed that this external database has been preconfigured by the organization and contains a list of all known-good, whitelisted, external hosts by IP address, hostname and domain. If the destination IP address does not exist in the known-good hosts’ database, the security analyst will be prompted with a User Choice decision. This optional special condition within IncMan will pause the automatic execution of the Runbook, allow the security analyst to review the previously gathered Enrichment information and allow the security analyst to make a conditional flow decision. In this case, the User Choice decision asks the security analyst if they wish to block the destination IP address. If the analyst chooses to block the destination IP address, a Containment action will utilize McAfee Web Gateway to block the IP until further investigation and remediation can be conducted.
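IncMan expresses this logic as a visual R3 Runbook rather than code, but the decision flow described above can be sketched in Python. The threshold value, the stubbed reputation services, and the callback names below are illustrative assumptions, not DFLabs APIs:

```python
THREAT_THRESHOLD = 70  # user-defined reputation score threshold (assumption)

def run_runbook(dest_ip, rep_services, known_good, ask_analyst, block_ip):
    """Sketch of the runbook's decision flow; every external service is a stub."""
    # Enrichment: gather WHOIS and reverse DNS context (stubbed here).
    evidence = {"whois": f"whois({dest_ip})", "rdns": f"rdns({dest_ip})"}
    # Query two separate threat reputation services.
    scores = [svc(dest_ip) for svc in rep_services]
    if max(scores) <= THREAT_THRESHOLD:
        return "recorded", evidence        # record gathered data, then end
    if dest_ip in known_good:
        return "whitelisted", evidence     # known-good external host
    # User Choice decision: pause automation and let the analyst decide.
    if ask_analyst(dest_ip, evidence):
        block_ip(dest_ip)                  # containment via the web gateway
        return "blocked", evidence
    return "not-blocked", evidence

blocked = []
result, _ = run_runbook(
    "203.0.113.50",
    rep_services=[lambda ip: 85, lambda ip: 40],
    known_good=set(),
    ask_analyst=lambda ip, ev: True,
    block_ip=blocked.append,
)
assert result == "blocked" and blocked == ["203.0.113.50"]
```

The structure mirrors the runbook: enrichment always runs, the reputation threshold gates the conditional branch, and the human analyst remains the final authority on containment.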
If you want to learn more about how to contain threats, block malicious traffic and halt data exfiltration utilizing Security Orchestration, Automation and Response (SOAR) technology, get in touch with one of the team today to request your live one to one demo.
Discussions about security breaches often focus on the planning elements, but simply talking about planning is not enough. Comprehensive plans need to be drawn up, fully executed and regularly reviewed in order to be successful. This is the only way to potentially contain the breach and limit the impact it could have on the organization. Properly planning and implementing is the difference between success and failure for companies when it comes to security and incident response.
As the ever-evolving cyber security landscape poses new challenges, companies are pushed even more to fight back the growing number and even more sophisticated levels of cyber attacks. Organizations across all sectors and industries are potential targets and could become victims at any time. With attacks escalating in all areas, whether via phishing or malware, for example, security operations teams need to be prepared to respond to existing and new types and strains of threats, in order to fully defend and protect their company assets and networks.
Along with prevention becoming increasingly difficult for security teams, some organizations also tend to have a weakness when it comes to incident response. Below outlines some of the main reasons why this failure is happening today and if this a true representation of your organization, it is important for action to be taken in order to improve it.
With the number of sophisticated cyber threats growing at a phenomenal rate over the past several years, the security industry has seen an explosion of security tools available in the market. Many of these, however, have had the adverse effect of creating more tasks for security teams and analysts in terms of monitoring, correlating, and responding to alerts. Analysts are pushed to work across multiple platforms and gather data from every single source manually, then enrich and correlate that data, which can take many hours or even days.
Security budgets are often limited, and while it is often easier to gain support and approval for additional security apps and tools than for additional staff members, this means that many security teams are forced to search for innovative ways to perform many different tasks with extremely limited personnel resources.
Another important point to note is that with increased market competition for experienced and skilled analysts, companies are often forced to choose between hiring one highly skilled staff member versus a couple of less experienced, junior level ones.
Over the years, organizations have adopted an increasing number of security tools to fight back the growing number of security threats. But even though these tools manage alerts and correlate them through security information and event management (SIEM) systems, security teams are still overwhelmed by the volume of alerts being generated and in many instances are not physically able to respond to them all.
Every single alert must be manually verified and triaged by an analyst. Then, if the alert is determined to be valid, additional manual research and enrichment must take place before any further action can be taken to address the threat. While all of these processes take place, other potential alerts wait unresolved in a queue, while new alerts keep being added. The problem is, any one of these waiting alerts may represent a window of opportunity for an attacker.
Risk of Losing Skilled Analysts
Security processes are performed manually and are quite complex in nature, so training new staff members takes time. Even with documented procedures in place, organizations still rely on their most experienced analysts for decision making, based on their knowledge and work experience in the company. This is commonly referred to as tribal knowledge, and the more manual the processes are, the longer the knowledge transfer takes. Moreover, highly qualified analysts are a real treasure for the company; every time a company loses such a staff member, part of that tribal knowledge is lost with them, and the entire incident response process suffers a tremendous loss. Even though companies make efforts to keep at least one skilled analyst who is able to teach other staff members, they aren’t always successful in doing so.
Failure to Manage Phases
Security teams work with metrics that could be highly subjective and abstract, compared to other departments which often work with proven processes for measuring the effectiveness or ineffectiveness of a program. This is largely due to the fact that conservative approaches and methods for measuring ROI aren’t applicable, nor appropriate when it comes to security projects, and might give misleading results. Proper measurement techniques are of utmost importance when it comes to measuring the effectiveness and efficiency of a security program, therefore it is necessary to come up with a measurement process customized according to the needs of the company.
Another important issue that should be mentioned here is the one concerning the management of different steps of the incident response process. Security incidents are very dynamic processes that involve different phases, and the inability to manage these steps could result in great losses and damages to the company. For the best results, companies should focus on implementing documented and repeatable processes that have been tested and well understood.
In order to resolve these issues, organizations should consider the following best practices.
The coordination of security data sources and security tools in a single seamless process is referred to as orchestration. Technology integrations are most often used to support the orchestration process. APIs, software development kits, or direct database connections are just a few of the numerous methods that can be used to integrate technologies such as endpoint detection and response, threat intelligence, network detection, and infrastructure, IT service and account management.
Orchestration and automation might be related, but their end goals are quite different. Orchestration aims to improve efficiency through increased coordination and decreased context switching among tools, for faster and better-informed decision-making, while automation aims to reduce the time these processes take and make them repeatable, in some cases applying machine learning to individual tasks. Ideally, automation increases the efficiency of orchestrated processes.
Strategic and Tactical Measurement
Information supporting tactical decisions usually consists of incident data for analysts and managers, which might include indicators of compromise, assets, process status, and threat intelligence. This information improves decision-making from incident triage and investigation through containment and eradication.
On the other hand, strategic information is aimed at executives and managers, and it’s used for high-level decision making. This information might comprise statistics and incident trends, threat intelligence and incident correlation. Advanced security programs might also use strategic information to enable proactive threat hunting.
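The distinction can be made concrete with a small example: tactical data is per-incident detail, while strategic information is the roll-up that executives act on. The field names and figures below are illustrative only:

```python
from collections import Counter

def strategic_summary(incidents):
    """Roll up per-incident (tactical) records into executive-level trends."""
    by_type = Counter(i["type"] for i in incidents)
    mean_ttr = sum(i["hours_to_resolve"] for i in incidents) / len(incidents)
    return {"incidents_by_type": dict(by_type),
            "mean_hours_to_resolve": mean_ttr}

# Tactical view: individual incident records an analyst works from.
incidents = [
    {"type": "phishing", "hours_to_resolve": 4},
    {"type": "phishing", "hours_to_resolve": 6},
    {"type": "malware", "hours_to_resolve": 10},
]

# Strategic view: trends and averages for high-level decision making.
summary = strategic_summary(incidents)
assert summary["incidents_by_type"]["phishing"] == 2
assert round(summary["mean_hours_to_resolve"], 2) == 6.67
```

The same underlying records serve both audiences; only the level of aggregation changes.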
If these challenges sound familiar within your security operations team, find out how DFLabs’ Security Orchestration, Automation and Response solution can help to address these to improve your overall incident response.
Nowadays, businesses face the fact that cyber attacks are part of the overall picture, and will happen at any given moment. Nobody is in doubt about this, and the question has shifted from ‘if they happen’, to ‘when they happen’. Along with this, cybercriminals have become much more sophisticated, raising the costs of fighting back on all industry levels.
Managing cyber security issues can pose a real challenge within a company. The new and complex networks, business requirements for innovation and new ways of delivery of services require new methods and approaches to the way security is handled. Traditional security management methods no longer work. Today, cyber security management should aim towards efficiency when it comes to possible future threats.
Serious data breaches can cost a company hundreds of millions of dollars. Often, what makes a breach serious is the effectiveness and speed of the incident response process.
This being said, creating an incident response program is of utmost importance. It has to excel in the following areas: visibility, incident management, workflows, threat intelligence, and collaboration/information-sharing. Below we’ll take a closer look at each of these areas and discover their importance from a systems level perspective.
Bearing in mind the number of security products in an average company, visibility should be the core of any incident response system – this means aggregating data feeds from commercial and open-source products. When setting up an incident response system, specialists should consider platforms that offer out-of-the-box support for security products. Although not every platform supports everything by default, the one you choose should be flexible enough to add bi-directional integrations with security products that are not supported by default. But even though bi-directional integrations are important for supporting full automation and orchestration, they are not always necessary for every technology. For example, with simple detection and alerting technologies, a unidirectional event forwarding integration will do the job. Just check that common methods of event forwarding and data transfer (such as syslog, database connections, APIs, email and online forms) are supported.
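As an example of the simple unidirectional case, a forwarded syslog event can be normalized into an alert with a few lines of parsing. This sketch uses a simplified RFC 3164-style pattern and hypothetical field names; it is not production-grade syslog handling:

```python
import re

# Simplified RFC 3164-style pattern; real syslog parsing is more involved.
SYSLOG_RE = re.compile(
    r"<(?P<pri>\d+)>(?P<ts>\w{3} +\d+ [\d:]+) (?P<host>\S+) (?P<msg>.*)")

def ingest_syslog(line):
    """Turn a forwarded syslog event into a normalized alert dict."""
    m = SYSLOG_RE.match(line)
    if not m:
        return None
    return {"host": m.group("host"),
            "message": m.group("msg"),
            "severity": int(m.group("pri")) & 0x07}  # low 3 bits = severity

alert = ingest_syslog("<134>Oct  5 14:03:01 fw01 Blocked outbound to 203.0.113.9")
assert alert["host"] == "fw01"
assert alert["severity"] == 6  # 134 & 7 == 6 (informational)
```

Events that arrive one-way like this can still feed the full case-management pipeline, even without a bi-directional integration back to the source device.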
A well-structured incident response program should enable orchestration and automation of the security products that the organization uses. Above everything else, it should include the ability to manage the entire incident response process, starting from the basics, such as tracking cases, recording actions during the incident, as well as reporting on critical metrics and KPIs.
Furthermore, a more advanced incident response system should provide the following:
- Phase and objective tracking
- Detailed task tracking, including assignment, time spent and status
- Asset management — tracking all physical and virtual assets involved in the incident
- Evidence and chain of custody management
- Indicator and sample tracking, correlation and sharing
- Document and report management
- Time and monetary effort tracking
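A minimal sketch of the data model behind these capabilities might look like the following. The class and field names are hypothetical, chosen only to show how tasks, assets, evidence with chain of custody, and indicators can hang off a single incident record:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """Detailed task tracking: assignment, time spent and status."""
    description: str
    assignee: str
    minutes_spent: int = 0
    status: str = "open"

@dataclass
class EvidenceItem:
    """Evidence with an ordered chain-of-custody record."""
    label: str
    custody: list = field(default_factory=list)

    def transfer(self, holder: str):
        self.custody.append(holder)

@dataclass
class Incident:
    """One case tying phases, tasks, assets, evidence and indicators together."""
    title: str
    phase: str = "identification"
    tasks: list = field(default_factory=list)
    assets: list = field(default_factory=list)      # hosts, accounts involved
    evidence: list = field(default_factory=list)
    indicators: set = field(default_factory=set)

case = Incident("Suspicious outbound traffic")
case.tasks.append(Task("Acquire memory image", "analyst1"))
disk = EvidenceItem("laptop-disk-01")
disk.transfer("analyst1")
disk.transfer("evidence locker")
case.evidence.append(disk)
assert disk.custody == ["analyst1", "evidence locker"]
```

Keeping all of these objects under one incident record is what makes later reporting on metrics, time spent, and custody history straightforward.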
One of the key capabilities that should be part of an incident response system is automation and orchestration of workflows. The result is more efficient processes and a heavy reduction in repetitive tasks for analysts.
There are two core methods for codifying process workflows: linear-style playbooks and flow-controlled workflows, or runbooks.
Both methods have advantages and disadvantages, and as each is suitable for different use cases, both should be supported by the incident response system. In either case, workflows should be flexible enough to support almost any process, the use of built-in and custom integrations, and the creation of manual tasks to be completed by an analyst.
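The two codification methods can be sketched in a few lines of Python. The step names and context dictionary are hypothetical, used only to contrast a fixed sequence with conditional flow control:

```python
def run_playbook(steps, ctx):
    """Linear playbook: execute every step, in order, unconditionally."""
    for step in steps:
        step(ctx)
    return ctx

def run_runbook(steps, start, ctx):
    """Flow-controlled runbook: each step names the next step (or None to end)."""
    current = start
    while current is not None:
        current = steps[current](ctx)
    return ctx

# Linear: always enrich, then notify.
trace = []
run_playbook([lambda c: trace.append("enrich"),
              lambda c: trace.append("notify")], {})
assert trace == ["enrich", "notify"]

# Flow-controlled: branch on a score gathered during enrichment.
def enrich(ctx):  ctx["score"] = 90; return "decide"
def decide(ctx):  return "contain" if ctx["score"] > 70 else None
def contain(ctx): ctx["contained"] = True; return None

ctx = run_runbook({"enrich": enrich, "decide": decide, "contain": contain},
                  "enrich", {})
assert ctx.get("contained") is True
```

The playbook is easier to author and audit; the runbook can skip or add steps based on what earlier steps discovered, which is why both styles are worth supporting.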
The capability of incorporating threat intelligence feeds is one of the most basic requirements for an incident response system. Moreover, with the ability to correlate threat intelligence, it’s easier to discover attack patterns, vulnerabilities, and other current risks without manual analysis. Adding the automated correlation also helps identify whether an ongoing incident shares common factors with any previous incidents. But even though automated correlation is crucial for analysts to make decisions, visual correlation is also important. Visualizations of threat intelligence and correlated events are particularly useful for threat hunting and detecting attacks/patterns that could not have been detected using other methods.
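As a rough illustration, automated correlation can be as simple as intersecting the indicators of a new incident with those of past incidents; the incident IDs and indicator values below are made up:

```python
def correlate(new_iocs, past_incidents):
    """Find past incidents sharing indicators with the current one."""
    matches = {}
    for incident_id, iocs in past_incidents.items():
        shared = set(new_iocs) & set(iocs)
        if shared:
            matches[incident_id] = shared
    return matches

history = {
    "INC-101": {"203.0.113.9", "evil.example"},
    "INC-102": {"198.51.100.5"},
}
matches = correlate({"203.0.113.9", "10.0.0.5"}, history)
assert matches == {"INC-101": {"203.0.113.9"}}
```

Real platforms correlate on far richer features than raw indicator overlap, but even this simple intersection surfaces the "have we seen this before?" question automatically.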
Collaboration and Information-Sharing
Incident response is never a one-person show. Generally, it requires the participation of many people, and often of multiple teams. To be highly effective in such an environment, an incident response system should support seamless collaboration and information-sharing between all stakeholders and team members.
This means that authorized staff members should have access to the status of the incident and other generated information, including team members’ actions. Also, all staff members should communicate in a secure fashion, using out-of-band communication mechanisms.
Furthermore, information-sharing and cooperation should be a regular practice with external entities, especially with law-enforcement agencies. Information-sharing, such as threat intelligence reports, is vital in the fight against cybercrime.
Most companies will experience a data breach sooner or later, and how they respond will affect the future of the business. These essential components will help ensure that an organization’s incident response program can detect, contain and mitigate a breach before it becomes more serious.
Forensic incidents can be complex and difficult to manage. Large-scale forensic investigations involve dozens or even hundreds of assets, and this information must be recorded, managed and correlated to be effective. DFLabs and OpenText are key partners in delivering these capabilities. This blog post outlines some of the key challenges security operations teams face in effective forensics management, how they can be resolved, and briefly presents a use case of the integration in action.
Acquiring forensic data from dozens, or even hundreds, of potentially impacted hosts across an enterprise can pose a real challenge, especially when those hosts span continents. Once this data is acquired, it must be organized, enriched and correlated before effective analysis can begin. This can cost hundreds of analyst hours on repetitive tasks before any actual investigative work takes place, during which time attackers could be continuing to compromise the network or exfiltrate data.
DFLabs' integration with EnCase via its IncMan SOAR platform allows users to gather critical asset data more quickly, manage that data, and further enrich it using IncMan's orchestration and automation capabilities. It helps solve these specific security operations challenges analysts face on a daily basis:
- How can I quickly gather host information from endpoints across my infrastructure?
- How can I correlate and enrich data collected from across the different hosts in my infrastructure?
- How can I track my evidence, including acquisition information, location and chain of custody?
- How can I manage all the findings from my forensic examination in one location, correlate and enrich them?
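The evidence-tracking challenge in particular, recording acquisition details, location and chain of custody, can be sketched in a few lines. This is an illustrative assumption of how such tracking might work, not the DFLabs or EnCase implementation.

```python
# Illustrative sketch of evidence tracking: each item records an
# acquisition hash and an append-only chain-of-custody log, so later
# copies can be verified against the original.
import hashlib
from datetime import datetime, timezone

class EvidenceItem:
    def __init__(self, label, data: bytes, acquired_by):
        self.label = label
        # Hash taken at acquisition time anchors all later verification
        self.sha256 = hashlib.sha256(data).hexdigest()
        self.custody = [(acquired_by, "acquired", datetime.now(timezone.utc))]

    def transfer(self, to, action="transferred"):
        """Append a custody event; the log is never rewritten."""
        self.custody.append((to, action, datetime.now(timezone.utc)))

    def verify(self, data: bytes) -> bool:
        """Check that a copy still matches the acquisition hash."""
        return hashlib.sha256(data).hexdigest() == self.sha256

disk = b"raw image bytes"                      # placeholder for an image
item = EvidenceItem("host42-disk0", disk, "analyst_a")
item.transfer("analyst_b")
```

Keeping the hash and the custody log on the same record is what makes it possible to answer both "is this evidence intact?" and "who has handled it?" from one place.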
Complete Forensic and Evidence Management
EnCase from OpenText is the premier digital investigation platform for both law enforcement and private industry. EnCase allows acquisition of data from the greatest variety of devices, including over 25 types of mobile devices such as smartphones, tablets, and GPS devices. EnCase enables a comprehensive, forensically sound investigation and produces extensive reports on findings while maintaining the evidence integrity. EnCase Enterprise, built specifically for large enterprise clients, allows forensic analysts to reach across the enterprise network, gathering critical forensic data from hosts across a campus or across the world.
By integrating with OpenText EnCase, DFLabs IncMan SOAR can harness the power of EnCase Enterprise Snapshots, making gathering critical forensic artifacts from hosts around the globe a seamless task. Once this information has been collected by EnCase, IncMan automatically organizes this data by host, performs correlation, and allows a user to harness the power of IncMan’s other integrations to further enrich this information.
In addition to Snapshot information, IncMan is also able to ingest EnCase bookmarks, correlating forensic tools and findings between EnCase cases, as well as acquisition information, making the tracking of forensic clones easier than ever before.
Use Case in Action
An IDS alert for suspicious activity on a host has automatically generated an Incident within IncMan, triggering an investigation. Utilizing IncMan’s EnCase Snapshot EnScript, an analyst performs a snapshot of the host in question, gathering critical process, network and handle information.
Using IncMan’s enrichment capabilities on the newly acquired snapshot information, a suspicious process and several suspicious network connections have been identified, prompting the need for a more detailed forensic investigation.
Utilizing several of IncMan’s containment integrations, traffic from the suspicious IP addresses has been temporarily blocked and the process’s hash value has been banned from running across the environment.
A forensic clone of the host is created to permit a more detailed forensics and root cause analysis. Once the forensic clone is created, IncMan’s Bookmarks and Clones EnScript is used to transfer information regarding the clone from EnCase to IncMan, making tracking the clone’s location and verification simple and easy.
Based on the forensic analysis of the host, a suspicious executable and configuration files have been identified and bookmarked for further analysis. Utilizing IncMan's Bookmarks and Clones EnScript, these EnCase bookmarks are imported into IncMan to permit improved tracking and information sharing between analysts.
Making use of one of IncMan’s several integrations with various sandboxing technologies, the executable bookmarked in EnCase is identified as a variant of known malware. Further research on this known malware variant leads to a remediation strategy for the infection of this host.
If you currently use EnCase from OpenText and would like to learn more, request a bespoke one-to-one demonstration of the integration with DFLabs' SOAR platform. See for yourself how we can help you free up valuable analyst time and improve the overall performance of your security program by automating host data acquisition, tracking and managing important information, and storing all forensic artifacts in a single location for easier use and correlation.
Also for further reading, check out our white paper titled “DFLabs IncMan SOAR: For Incident and Forensics Management”.
Performing threat hunting and incident response on live hosts, collectively referred to here as live analysis, can be a complicated task. Performed properly, live analysis can detect and preserve volatile artifacts, such as network connections, running processes, hooks and open files, which may be the only evidence of today's advanced attacks. It may also be the only option when taking a host offline for traditional disk forensics is not possible, such as with business-critical application servers or domain controllers. Performed improperly, however, it can alert attackers to your presence, destroy critical information or render any evidence gathered inadmissible in legal proceedings.
Live forensics and live threat hunting
Live forensics and live threat hunting begin as two different processes. When performing live forensics, we typically start with a pivot point; something has already been detected as anomalous, which has prompted us to examine the host. During live threat hunting, we are seeking that anomaly, that indicator of potential malicious activity, to use as a pivot point for further investigation. Once that initial indicator has been discovered, the traditional incident response process, often involving further live forensics, begins.
Performing live analysis poses several unique challenges when compared to traditional offline disk forensics. Although any forensic process must be documented and repeatable, these attributes are especially important when performing live analysis. Unlike offline disk forensics, where the original evidence should theoretically remain static and unchanged, live evidence is constantly changing. In fact, we are changing the live evidence by performing live forensics. Although the live analysis process is repeatable, it cannot be repeated while achieving exactly the same results; processes start and end, network connections are terminated, and memory is re-allocated. This means that our live analysis processes must be able to stand up to increased scrutiny.
Because live analysis involves executing commands on a running host, it is crucial that the process also be performed in a secure manner. Only trusted tools should be executed. Each tool, and the commands used to execute it, should be tested before any live analysis to ensure that the results are known and only the intended actions occur. It is equally important to ensure that the tools and commands you tested are the same ones executed during each live analysis.
On Friday, September 7th, I will be speaking at the SANS Threat Hunting and IR Summit in New Orleans regarding some of the challenges and best practices when performing threat hunting and incident response on live hosts. I will also be demoing DFLabs free tool, the No-Script Automation Tool (NAT), which can be used to assist in the live data acquisition process. If you have not had a chance to see NAT, please check out our blog post here, and our demo video here.
Also, find out which top cyber security events DFLabs will attend this fall.
I hope to see you all at the SANS Threat Hunting and IR Summit soon. Safe travels and avoid the storm!
Attending face-to-face events does wonders for career networking and acquiring knowledge, plus it’s always incredibly helpful to see the latest advancements in technology first-hand, view a new tool in action, or simply get some answers to questions you have from industry experts.
This becomes even more important if your organization wants to stay up to date with the latest security trends and ahead of ever-evolving cyber threats in today's rapidly changing threat landscape. If so, attending these top-notch cyber security conferences in the months ahead should be a priority for you and your security team. Whether you are a C-level executive, a security operations manager or a security analyst, there will be something there to benefit you.
There are a growing number of events taking place around the globe this fall with cyber security as their main focus. These gather tech enthusiasts, developers, pioneers, security experts, and many other masterminds at the same venue with a single goal in mind: to improve the cyber security ecosystem. Picking the conferences and summits a company should attend can be a real challenge with so many to choose from, which is exactly why we prepared a quick guide to some of the most exciting events, large and small alike. It's not too late to plan your travel!
So, here's the lineup of our top-rated cyber security events where DFLabs will be present. Each will give you the opportunity to chat with your peers, attend presentations and keynotes, and engage in discussions about the dark web, cyber espionage, malware and, most importantly, incident response: how to detect, respond to and remediate potential security incidents, among many other topics.
6-7 September 2018, New Orleans, US
This two-day in-depth summit is focused on the latest in threat hunting and incident response techniques that can be used to successfully identify, contain and eliminate adversaries targeting your networks. The summit will put special focus on the effectiveness of threat hunting in reducing the dwell time of adversaries, providing actionable threat hunting strategies, as well as tools, tactics, and techniques that can be used to improve organizations' defenses. Our Senior Product Manager, John Moran, will also be speaking on the subject of “Threat Hunting Using Live Box Forensics”.
13-14 September, 2018, Warsaw, Poland
The SCS conference consists of presentations from leading world authorities in the cyber security realm. This conference gathers leading international companies with presentations focused on cyber security, as well as guests from all around the globe, while maintaining a large Polish presence. DFLabs, along with its Polish-based partner, Orion Instruments Polska, will be engaging with the audience during live presentations, as well as on the exhibition floor during this two-day event.
18-19 September, Copenhagen, Denmark
DFLabs is a proud sponsor of Think In 2018, LogPoint’s first ever customers and partners conference. With the recent integration of LogPoint’s SIEM with DFLabs’ SOAR solution, this conference will provide a unique opportunity to connect with both organizations in one place and enable you to ask important questions in relation to how this joint solution can support your business needs. See first-hand a comprehensive joint demonstration during the live briefing sessions regarding how to integrate an effective incident response program combining the power of SIEM and SOAR technology.
18-20 September, Singapore
GovWare is in its 27th year and is the cornerstone event of the Singapore International Cyber Week, featuring the latest trends in all things cybersecurity with a focus on the Government sector. DFLabs with its partner PCS Security will be showcasing its solutions to “Control Your Cloud”, where you can learn how to create a more efficient and effective response to cyber security incidents.
18-19 September, London, UK
SINET is dedicated to building a cohesive, worldwide cybersecurity community with the goal of accelerating innovation through collaboration. SINET is a catalyst that connects senior level private and government security professionals with solution providers, buyers, researchers and investors. DFLabs is delighted to be participating in and sponsoring this London event to share knowledge and broaden the awareness and adoption of innovative cybersecurity technologies.
9-11 October, Nuremberg, Germany
it-sa is Europe’s largest exhibition for IT security and one of the most important worldwide events where experts will be providing information on current issues, strategies and technical solutions. In partnership with Softshell, DFLabs will be showcasing its latest solution features to enable organizations to transform their security operations, acting as a force multiplier for their security team to decrease the time to detect and resolve incidents.
14-18 October, Dubai, UAE
If you’re talking technology within the Middle East, Africa and Asia, GITEX is the place to be. Right from world-famous industry names to Silicon Valley’s hottest startups, everyone heads to GITEX in anticipation of big business partnerships, future-ready gear and booming success. As the largest technology event in the Middle East, Africa and South Asia, see new technologies and innovation come alive. During GITEX Technology Week, DFLabs will be available with our partner RAS Infotech at booth G02.
If you are attending one or more of these events, or even if you aren't able to attend but would like to learn more about our ever-evolving Security Orchestration, Automation and Response platform and how to improve the performance of your security program, do make sure to get in touch, whether for an informal chat, a more formal discussion or to see a live demo.
We look forward to hearing from you and seeing you there!