Automated Responder Knowledge (ARK) in Action

We released our machine learning engine, PRISM, in our most recent 4.2 release. The first capability we have built on PRISM is Automated Responder Knowledge (ARK). This capability will change the way incident responders and SOC analysts respond to incidents, and how they share and transfer their knowledge to the rest of the team. The key to this capability is that it learns from your own analysts' responses to historical incidents to guide the response to new ones.

We are not re-inventing the wheel with this feature. SOC and incident response teams have been transferring this knowledge the old-fashioned way for a long time, through six to twelve months of training. What we are providing is GPS and satellite navigation: guiding the wheel and giving you different paths to choose from according to the terrain you are in.

We do this by analyzing incidents and their associated attributes and observables to work out how closely they are related. We can then suggest actions and playbooks based on your organization's historical responses to similar threats and incidents.


Using Automated Responder Knowledge (ARK) in IncMan

Step 1: Not really a step, as it is done automatically by Automated Responder Knowledge (ARK) in the background for every incoming incident. Every incident has a feature space containing all the information related to it: every attribute, associated observable and attached piece of evidence. ARK analyzes the feature spaces of every incident ever resolved to build a historical model. When a new incident is opened, ARK compares it to that model, scoring and ranking related incidents and actions based on shared and similar attributes. The weighting used in the ranking can be customized by analysts.
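To make the idea concrete, here is a minimal sketch of weighted feature-space similarity between a new incident and historical ones. The field names and weights are purely illustrative, not how ARK is implemented internally.

```python
# Minimal, illustrative sketch of weighted feature-space similarity.
# Field names and weights are assumptions, not ARK's internal implementation.

def similarity(new_incident, past_incident, weights):
    """Score how closely two incidents are related across weighted feature categories."""
    score = 0.0
    for category, weight in weights.items():
        new_features = set(new_incident.get(category, []))
        old_features = set(past_incident.get(category, []))
        if not new_features and not old_features:
            continue
        overlap = len(new_features & old_features) / len(new_features | old_features)
        score += weight * overlap  # weighted Jaccard overlap per category
    return score

def rank_related(new_incident, history, weights):
    """Rank historical incidents by similarity to the new one, highest first."""
    scored = [(similarity(new_incident, past, weights), past) for past in history]
    return sorted(scored, key=lambda pair: pair[0], reverse=True)

# Analysts could, for example, weight observables more heavily than incident type.
weights = {"incident_type": 1.0, "attributes": 1.5, "observables": 2.0}
```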


Step 2: Open the incident, selecting the applicable incident type. To save time, you can create an incident template to prepopulate some of this context automatically in the future.


Step 3: Select Playbooks, and then PRISM.

On the next screen, you will see a variety of suggested actions and related incidents, based on the feature space that your incident is matched against. The slider at the top determines the ranking threshold for the actions that are suggested: move it to the far left and actions from the entire feature space appear; move it to the far right and only a few actions from the most highly ranked incidents appear.
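Continuing the earlier illustrative sketch, and assuming similarity scores are normalized to the range 0 to 1, the slider position can be thought of as a score threshold below which suggestions are filtered out:

```python
# Illustrative only: mapping a slider position (0.0 = far left, 1.0 = far right)
# to a similarity threshold for suggested actions. Assumes normalized scores.

def suggest_actions(ranked_incidents, slider_position):
    """Return actions from historical incidents whose similarity clears the slider threshold."""
    suggestions = []
    for score, incident in ranked_incidents:
        if score >= slider_position:  # far left keeps everything, far right keeps only top matches
            suggestions.extend(incident.get("actions", []))
    return suggestions
```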


Step 4: Determine which automation and actions you want to use from the suggestions. After saving, you will be presented with options such as Auto-Commit, Auto-Run, Skip Enrichment, Containment, Notification or Custom Actions. You can select only the actions you want to automate; if you are concerned about running containment automatically, for example, you simply deselect those options.
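Conceptually, the saved selection amounts to a set of per-category automation toggles. The option names below come from the step above; the structure itself is only a sketch, not IncMan's configuration format.

```python
# Sketch of per-category automation toggles; option names come from the UI
# described above, the structure is illustrative, not IncMan's actual format.
automation_options = {
    "auto_commit": True,
    "auto_run": True,
    "skip_enrichment": False,
    "containment": False,   # deselected: containment remains a manual decision
    "notification": True,
    "custom_actions": False,
}

enabled = [name for name, on in automation_options.items() if on]
print(enabled)  # -> ['auto_commit', 'auto_run', 'notification']
```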


Step 5: The automated actions are executed and the incident is resolved, based on the automated responder knowledge that machine learning has generated from prior incidents.

ARK is designed to facilitate knowledge transfer from senior to junior analysts and to speed up incident response by applying machine learning to automate the knowledge gathering and analysis.

When is Security Automation and Orchestration a Must-Have Technology? – Addressing Gartner’s SOAR Question

Last week, Anton Chuvakin from Gartner announced that he and Augusto Barros are planning to conduct research in Q4 2017 on the topic of Security Orchestration, Automation and Response (SOAR), or Security Automation and Orchestration, depending on which analyst firm's market designation you follow. At DFLabs we are very excited that Gartner is finally showing our market space some love and will be helping end users to better assess and differentiate SAO offerings.

Anton provided many questions that he wanted SAO vendors to prepare for. They immediately piqued our interest, with one question in particular standing out to us:

1. When is SOAR a MUST have technology? What has to be true about the organization to truly require SOAR? Why your best customer acquired the tools?

Anton also said that he had one main problem with Security Automation and Orchestration. In his own words: “For now, my main problem with SOAR (however you call those security orchestration and automation tools…if you say SOAPA or SAO we won’t hate you much) is that I have never (NEVER!) met anybody who thought ‘my SOAR is a MUST HAVE.’”

The question is not entirely unwarranted. During my own time at Gartner covering the SOAR space, I spoke to many clients who were seeking an SAO solution without knowing that they were. Typical comments were, “I have too many alerts and false positives to be able to deal with them all”, or “We are struggling to hire enough skilled people to be able to respond to all of the incidents that we have to manage”. Another common comment was, “I am struggling to report operational performance to my executives”. Often, these comments were followed by the question, “Do you know of any technology that can help?”.

Typically, these organizations had a mature security monitoring program, usually built around a SIEM. They often had critical drivers, such as regulatory requirements, or held sensitive customer data. We hear the same buying drivers from our own customer base.

To sum up the most common drivers for someone asking about Security Automation and Orchestration:

  1.  A high volume of alerts and incidents and the challenge in managing them
  2.  A large portfolio of diverse 3rd party security detection products resulting in a large volume of alerts
  3.  Regulatory mandates for incident response and breach notification
  4.  An overstretched security operations team
  5.  Reporting risk and the operational performance of the CSIRT and SOC to an executive audience

One interesting thing is that when there is no external driver such as regulatory compliance, the decision to deploy a Security Automation and Orchestration solution is often a question of maturity. Most organizations don't realize that they will be unable to cope with the volume of alerts and the resulting alert fatigue until they have deployed a SIEM and a full advanced threat detection architecture.

A common misconception is that the SIEM can reduce the number of incoming alerts by applying correlation rules. This is not entirely untrue, but correlation rules will only reduce alert volume by a small percentage. They are essentially signature-based: you need to know in advance what you want to correlate, and writing correlation rules to cover each and every incoming alert is not a trivial task. Even with correlation rules, additional work is required to qualify an incident. Gathering additional IoCs, incident observables and context is still a very manual process. Lastly, detection is only one part of the entire incident response process: notifying stakeholders, gathering forensic evidence and containing the threat also have to be done manually. These are the areas where SAO solutions provide the greatest ROI, acting as a force multiplier.
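To illustrate why correlation rules behave like signatures, here is a hypothetical rule sketched in Python rather than in any particular SIEM's rule language. It only ever fires on the exact pattern it was written for, which is precisely the limitation described above.

```python
# Hypothetical correlation rule: N failed logins followed by a success from the
# same source IP. Sketched in Python, not in any vendor's rule language.
from collections import defaultdict

def brute_force_rule(events, threshold=5):
    """Flag source IPs with at least `threshold` failed logins followed by a success."""
    failures = defaultdict(int)
    alerts = []
    for event in events:  # events are assumed to be ordered by time
        src, outcome = event["src_ip"], event["outcome"]
        if outcome == "failure":
            failures[src] += 1
        elif outcome == "success" and failures[src] >= threshold:
            alerts.append({"rule": "possible_brute_force", "src_ip": src})
            failures[src] = 0
    return alerts
```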

Automate or Die Without Breaking Your Internet

Threat actors are increasingly adopting security automation and machine learning – security teams will have to follow suit, or risk falling behind.

Many organizations still conduct incident response based on manual processes. Many playbooks that we have seen in our customer base, for example, hand off to other stakeholders within the organization and wait for additional forensic data or for remediation and containment actions to be executed.

While this may seem like good practice to avoid inadvertent negative consequences such as accidentally shutting down critical systems or locking out innocent users, it also means that many attacks are not contained in a sufficiently short time to avoid the worst of their consequences.

Manual Processes Cannot Compete with Automation

Reports are mounting about threat actors and hackers leveraging security automation and machine learning to increase the scale, volume and velocity of attacks. The implications should be cause for concern, considering that organizations have struggled to respond effectively even to less sophisticated attacks in the past.

Ransomware is a case in point. In its most simple form, a ransomware attack does not require the full cyber kill chain to be successful. A user receives an email attachment, executes it, the data is encrypted and the damage is done. At that point, incident response turns into disaster recovery.

Automated attacks have been with us for a long time. Worms and autorooters have been around since the beginning of hacking, with WannaCry and its worming capability only the most recent example. But these have only automated some aspects of the attack, still permitting timely and successful threat containment further along the kill chain.

Threat actors have also leveraged automated command and control infrastructure for many years. DDoS Zombie Botnets, for example, are almost fully automated. To sum it up, the bad guys have automated, the defenders have not. Manual processes cannot compete with automation.

With the increase in the adoption of automation and machine learning by cyber criminals, enterprises will find that they will have to automate as well. The future mantra will be “Automate or Die”.

Making the Cure More Palatable Than the Disease

But automating containment actions is still a challenging topic. Here at DFLabs we still encounter a lot of resistance to the idea from our customers. Security teams understand that the escalating sophistication and velocity of cyber attacks mean that they must become more agile to rapidly respond to cyber incidents. But the risk of detrimentally impacting operations makes them reluctant to do so, and they rarely have the political backing and clout even if they want to.

Security teams will find themselves having to rationalize the automation of incident response to other stakeholders in their organization more and more in the future. This will require being able to build a business case to justify the risk of automating containment. They will have to explain why the cure is not worse than the disease.

There are three questions that are decisive in evaluating whether to automate containment actions:

  1. How reliable are the detection and identification?
  2. What is the potential detrimental impact if the automation goes wrong?
  3. What is the potential risk if this is not automated?

Our approach at DFLabs is to carefully evaluate what to automate, and how to do so safely. We support organizations in selectively applying automation through our R3 Rapid Response Runbooks. Incident responders can apply dual-mode actions that combine manual, semi-automated and fully automated steps to provide granular control over what is automated. R3 Runbooks can also include conditional statements that apply full automation when it is safe to do so, but request that a human vet the decision in critical environments or where it may have a detrimental impact on operational integrity.
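As a conceptual sketch of such a conditional step, the three questions above can gate whether a containment action runs automatically or waits for human approval. The thresholds and names are illustrative, not the R3 engine's actual syntax, which is built in a visual editor.

```python
# Conceptual sketch only; thresholds and names are illustrative, not the
# R3 Runbook engine's actual conditions or syntax.

def containment_mode(detection_confidence, asset_criticality, risk_if_not_automated):
    """Decide whether containment runs automatically or awaits approval.

    All three inputs are assumed to be scores between 0 and 1.
    """
    if detection_confidence >= 0.9 and asset_criticality <= 0.3:
        return "auto"            # reliable detection, low blast radius: contain immediately
    if risk_if_not_automated >= 0.8 and detection_confidence >= 0.7:
        return "auto"            # waiting is riskier than acting
    return "manual_approval"     # critical environment or uncertain detection: keep a human in the loop

# A high-confidence detection on a non-critical workstation would run automatically.
print(containment_mode(0.95, 0.2, 0.5))  # -> "auto"
```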

We just released a whitepaper, “Automate or Die, without Dying”, by our Vice President of Product Evangelism and former Gartner analyst, Oliver Rochford, that discusses best practices to safely approach automation. Download the whitepaper here for an in-depth discussion on this controversial and challenging, but important topic.

Integrating Lessons Learned into Incident Response

Let me start by saying that total prevention is not attainable with today's technology. Whether through negligence or ignorance, any data stored on a network is subject to unauthorized access by third parties. Instead, we must combine prevention with detection and response. We know we are going to get breached, so we must focus on how we deal with that.

One significant activity that can improve cyber incident response and enable the timely mitigation of threats is the transfer of knowledge after an incident, as part of a formalized “Lessons Learned” phase of the incident response life cycle. Integrating processes and procedures from previously successful incident response activities can play a critical role in determining whether a business will suffer in terms of operational integrity, reputation and legal liability. A publicized security breach lowers customer confidence in the services offered by an organization and calls into question the safety of the sensitive third-party information it holds. This damages a business's credibility and translates directly into lost revenue.

In regulated industries, increased regulatory scrutiny is an additional consequence of a breach. This involves evaluating if the tools and procedures used in responding to security threats were sufficient. Integrating lessons learned into existing and future incident response playbooks ensures that the proper technologies and processes are deployed, and avoids accusations of gross negligence, expensive and time-consuming investigations and regulatory demands.

Procedural improvements can be incorporated into incident workflows via incident playbooks, ensuring that all stages of the incident response process have been acknowledged and addressed. This also ensures that required security measures and procedures are documented and that the relevant stakeholders are informed of their roles in case of an incident.

This process can be augmented through machine learning. Applying machine learning to this problem requires that all relevant data associated with incidents is analyzed and automatically applied to future incidents. DFLabs recently released its DF-ARK machine learning capability to do precisely this. Our patent-pending Automated Responder Knowledge (DF-ARK) module applies machine learning to historical responses to threats and recommends relevant runbooks and paths of action to manage and mitigate them. DF-ARK requires sufficient training data: it begins with no knowledge, but learns from the experience and actions of your security team, becoming more effective over time. DF-ARK implements supervised, case-based reasoning machine learning.
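The general shape of a supervised, case-based reasoning loop can be sketched as follows. This illustrates the technique in the abstract and is not the DF-ARK implementation; the similarity function could be the kind of weighted feature overlap sketched earlier.

```python
# Abstract sketch of supervised case-based reasoning: retrieve the most similar
# resolved cases and reuse their runbooks. Not the DF-ARK implementation.

def recommend_runbooks(new_case, case_base, similarity, k=3):
    """Recommend runbooks from the k most similar resolved cases."""
    ranked = sorted(case_base, key=lambda past: similarity(new_case, past), reverse=True)
    return [past["runbook"] for past in ranked[:k]]

def retain(case_base, new_case, chosen_runbook):
    """Supervision step: store what the analyst actually chose so future retrievals improve."""
    case_base.append({**new_case, "runbook": chosen_runbook})
```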

Figure 1: DFLabs IncMan Automated Responder Knowledge

Our approach also involves combining automated workflows and manual procedures to keep a human in the loop. These workflows can be continuously improved by applying new observations and data to fine-tune the methods and procedures identified in the lessons learned phase.

IncMan offers the R3 Rapid Response Runbook engine and Dual Mode playbooks to facilitate this. R3 Runbooks are created using a visual editor and support granular, stateful and conditional workflows to orchestrate and automate incident response activities such as incident triage, stakeholder notification, data and context enrichment and threat containment. Dual Mode Playbooks support manual, semi-automated and automated actions, meaning that users can automate the action without automating the decision.
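A rough sketch of the shape of such a workflow is shown below; the stage names are illustrative, and the real R3 engine is driven by a visual editor rather than code.

```python
# Rough sketch of a stateful, conditional runbook flow. Stage names are
# illustrative; the actual R3 engine is configured visually, not in code.

def run_runbook(incident, actions):
    """Walk an incident through triage, notification, enrichment and conditional containment."""
    state = {"incident": incident, "log": []}

    state["severity"] = actions["triage"](incident)
    state["log"].append(("triage", state["severity"]))

    actions["notify_stakeholders"](incident, state["severity"])
    state["enrichment"] = actions["enrich"](incident)  # gather context, IoCs and evidence

    # Dual mode: the action is automated, the decision need not be.
    if state["severity"] == "high" and actions["approve_containment"](state):
        actions["contain"](incident)
        state["log"].append(("containment", "executed"))
    else:
        state["log"].append(("containment", "skipped or awaiting approval"))
    return state
```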

Putting all of this together, here are five best practices for increasing the effectiveness of incident response via lessons learned:

  1. Encourage feedback from responders at every level. First, second and third line SOC operators and incident handlers each have a unique perspective that must be incorporated into future response playbooks.
  2. Review all relevant documentation to ensure compliance. This includes organizational policies or regulatory mandates to ensure any disparities are addressed in future playbooks.
  3. Chronicle any unanticipated or unusual events to extend procedures to mitigate similar occurrences in the future.
  4. Annotate enhancements to existing processes that were identified during the incident response cycle.
  5. Designate a business unit or individual to be responsible for making necessary changes to existing playbooks, processes or procedures and to distribute these to stakeholders.

Capitalizing on lessons learned during incident response provides immediate and long-term benefits that contribute the crucial time savings necessary to successfully mitigate future threats. Deploying a platform designed to facilitate the rapid inclusion of identified improvements into the incident workflow, such as DFLabs' IncMan, can not only reduce the time it takes to fully investigate an incident but also reduce the overhead required to do so. If you want more information, please contact us at DFLabs for a no-obligation demonstration of exactly how we can improve your response times, workflows and remediation activities.

Demolishing the Ivory Tower – Collaboration and Communication in Incident Response

A collaborative environment between IT and security groups is critical. The number of cyber security incidents impacting networks and customers is increasing exponentially, and mitigating security incidents and risks is more complex than ever before. Timely and effective communication is key to improved collaboration between all parties involved in the cyber incident response process. One of the simplest and most effective ways to improve communication between the relevant IT and security groups is to deploy a common, shared platform where stakeholders can review and analyze incidents across the entire cyber landscape. A cross-departmental platform that lets stakeholders correlate cyber incidents and risks with contextual information relevant to their roles and responsibilities plays a significant part in organizational success in this regard.

Incorporating knowledge transfer between disparate business entities, often separated both geographically and functionally, is essential to facilitate a better understanding of current IT and security challenges. The preferred way to provide this collaborative environment is via electronic communication media and devices. To tie all of these channels together, an organization should consider deploying a cyber incident response platform that can integrate these technologies, be they SMS, email or other messaging media, to cover the broadest range of communication channels for transmitting critical information to stakeholders.

Another successful strategy for communicating timely, critical information to the relevant stakeholders is the creation of an incident notification group. IncMan supports the creation of groups of Watchers that are apprised of incidents and activities automatically via SMS, email or an integrated communications system. A Watcher group can ensure that information is properly communicated to the appropriate stakeholders, giving them the capability to monitor incidents that may impact business continuity. Additionally, IncMan's integrated communications capabilities comply with industry best practices, which recommend having a separate, secure and hardened communications channel in case email or other internal communication channels are compromised. This independent messaging capability also provides additional benefits, such as asymmetric encryption.
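The pattern can be sketched roughly as follows; the class, channel and function names are hypothetical and not IncMan's actual API.

```python
# Illustrative watcher-group dispatch; class, channel and function names are
# hypothetical, not IncMan's actual API.

class Watcher:
    def __init__(self, name, channels):
        self.name = name
        self.channels = channels  # e.g. [("email", "soc-lead@example.com"), ("sms", "+15550100")]

def notify_watchers(watchers, incident_summary, send):
    """Send an incident summary to every watcher over every configured channel.

    `send(channel_type, address, message)` wraps whatever transport integration is in place.
    """
    for watcher in watchers:
        for channel_type, address in watcher.channels:
            send(channel_type, address, f"[Incident] {incident_summary}")
```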

Leveraging a dedicated solution that can orchestrate communications to stakeholders standardizes the process of cyber incident response and mitigation and is key to ensuring a more effective response. If you would like more information, or a free, no-obligation demonstration of how IncMan from DFLabs can more effectively automate and orchestrate your incidents, please contact us at [email protected]


Remove the Menial Tasks Through Automation

Alert fatigue is the desensitization that sets in when you are overwhelmed with too much information. The constant repetition and sheer volume of redundant information are painful and arduous, but sadly this often constitutes the daily reality for many people working in cyber security. Mike Fowler (DFLabs' VP of Professional Services) discusses several best practices to help address some of these challenges in his recent whitepaper, “DFLabs as a Force Multiplier in Incident Response”. I am going to discuss another one, but from a slightly different angle.

Imagine the scenario where we have tens of thousands of alerts. Visualize these as jigsaw pieces with a multitude of different shapes, sizes and colors, and the additional dimension of different states. We have alerts from a firewall, anomalies from behavioral analytics, authentication attempts, data source retrieval attempts and policy violations. There are a lot of ways to sift through this information, for example by using a SIEM to correlate the data and reduce some of the alerts. The SIEM can identify and cross-reference the colors and shapes of the jigsaw pieces, so to speak.

The next question, once I have all the pieces I need, is how do I put the puzzle together? How do I complete it and unlock the picture?

The “what does the jigsaw picture show?” question is something that will often puzzle responders, pun intended. How do you prioritize and escalate incidents to the correct stakeholders? How do you apply the correct playbook for a specific scenario? How do you know which pieces of information to analyze to fit the jigsaw pieces together and make sure the puzzle looks correct?

Automation can speed up putting that puzzle together, but making sure you automate the right things is just as critical. If skilled staff are running search queries that are menial, repetitive and require little cognitive skill to execute, you should ask yourself why they are performing these tasks instead of analyzing the puzzle pieces to figure out how they fit together.

Remove the menial tasks. Allow automation to do the heavy lifting so your teams are not only empowered with the information they need to successfully manage the response to an incident, but also have more time to figure out the why, how and what of the threat.

We also welcome you to join us for a webinar hosted by Mike Fowler on this topic on the 6th of September.

Don’t Wait for the Next Breach – Simulate It

Over the past few months, during the post-hoc analysis of WannaCry and Petya, we have spoken at great length about what should have been done during those incidents. This is quite a tricky thing to do in a balanced way, because we are all clever in hindsight. What hasn't been spoken about enough is understanding, more generally, what we need to do when things go wrong.

This question isn’t as simple as it appears, as there are a lot of aspects to consider during an incident, and only a brief window to identify, contain and mitigate a threat. Let’s look at just a few of these:

Response times
This is often the greatest challenge, but it is of utmost importance. Response is not only about understanding the “how” and “why” of a threat; it is also about putting a chain of events into action to make sure that the “what” doesn't spiral out of control.

Creating an effective playbook
A playbook should be a guide for how your incident response plan must be executed, and orchestration platforms contain these playbooks or runbooks. Note that these are not generic, plug-and-forget policies. They need to be optimized and mapped to your business and regulatory requirements and are often unique to your organization. Otherwise, the incident will be handled by the wrong playbook.

Skills and tool availability
Do you have the correct skills and tools available, and are you able to leverage them? Do you understand where your security gaps are, and do you know how to mitigate them?

On paper, incident response always works, right up until the moment of truth during a data breach shows that it doesn't. To avoid relying on theory alone, it is best to run breach simulations, simulating some of the attacks that may affect your organization to find out whether your processes and playbooks also work under more realistic conditions.

We're always playing catch-up, for many reasons: new technologies, new vulnerabilities and new threats. Software and hardware may always be at the mercy of hackers, criminals and other threat actors, so prevention alone is futile. We have to become more resilient and better at dealing with the aftermath of an attack.

The key summary for me is this: How do you respond? Can the response be improved? Use the lessons learned from breach simulations to understand how to make the response better than before.

3 Best Practices for Incident Categorization to Support Key Performance Indicators

The DNA sequence of each human is 99.5% similar to that of any other human. Yet when it comes to incident response and the manner in which individual analysts interpret the details of a given scenario, our near-total similarity seems to all but vanish. Where one analyst might characterize an incident as the result of a successful social engineering attack, another may instead identify it as a generic malware infection. Similarly, a service outage may be labeled a denial of service by some, while others will attribute the root cause to an improper procedure carried out by a systems administrator. Root cause and impact, or incident outcome, are just a couple of the many considerations that, unless properly accounted for in a case management process, will play havoc with a security team's reporting metrics.

Poor Key Performance Indicators can blind decision makers

What is the impact of poor KPIs? All too often, the end result is equally poor strategic decisions. Money and effort may be assigned to the wrong measures, for example to more ineffective prevention controls instead of improved response capabilities. In a worst-case scenario, poor KPIs can blind decision makers to the most pertinent security issues of their enterprise, and the necessary funding for additional security may be withheld altogether.

Three best practices address this all-too-common problem of attaining accurate reporting:

  1. A coherent incident management process is necessary in order to properly categorize incident activity. Its definitions must be clear, taking into account outliers, clarifying how root causes and impacts are to be tracked, and providing a workflow to assist analysts in accurately and consistently determining incident categorization.
  2. The process must be enforced to guarantee uniform results in support of coherent KPI’s. Training, quality assurance, and reinforcement are all necessary to ensure total stakeholder buy-in.
  3.  Security teams must have the technologies to support effective incident response and proper categorization of incidents.

There are several ways that the IncMan platform supports the three best practices:

First, IncMan provides a platform to act as the foundation for an incident management program. It provides customizable incident forms that can be fully tailored to an organization and the details it must collect in support of its unique reporting requirements. Custom fields specific to distinct incident types allow for detailed data collection and categorization. These custom fields can be coupled with common attributes to track specific data, giving security teams a high level of flexibility while maintaining reporting consistency across individual team members.
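To make this concrete, here is a hypothetical sketch of how incident types with custom fields and shared common attributes might be modeled to enforce consistent categorization. The field names are illustrative, not IncMan's schema.

```python
# Hypothetical categorization model; field names are illustrative, not IncMan's schema.
from dataclasses import dataclass, field

COMMON_ATTRIBUTES = ["root_cause", "impact", "detection_source"]  # tracked on every incident

INCIDENT_TYPE_FIELDS = {
    "phishing":       ["sender_domain", "payload_type", "credentials_entered"],
    "malware":        ["family", "infection_vector", "hosts_affected"],
    "service_outage": ["service_affected", "duration_minutes", "intentional"],
}

@dataclass
class Incident:
    incident_type: str
    common: dict = field(default_factory=dict)   # values for COMMON_ATTRIBUTES
    custom: dict = field(default_factory=dict)   # values for the type-specific fields

    def missing_fields(self):
        """List required fields not yet filled in, to enforce consistent categorization."""
        required = COMMON_ATTRIBUTES + INCIDENT_TYPE_FIELDS.get(self.incident_type, [])
        filled = {**self.common, **self.custom}
        return [name for name in required if name not in filled]
```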

Next, playbooks can be associated with specific incident types, providing step-by-step instructions for specialized incident response activities. Playbooks enforce consistency and can further reinforce reporting requirements. However, playbooks are not completely static, and while they certainly provide structure, IncMan’s playbooks also offer the ability to improvise, add, remove or substitute actions on the fly.

The platform’s Knowledge Base offers a repository for reference material to further supplement playbook instructions. Information collection requirements defined within playbook steps can be linked to Knowledge Base references, arming analysts with added information, for example with standard operating procedures pertaining to individual enterprise security tools, or checklists for applicable industry reporting requirements.

IncMan also includes Automated Responder Knowledge (ARK), a machine learning driven approach that learns from past incidents and the response to them, to suggest suitable playbooks for new or related incident types. This is not only useful for helping to identify specific campaigns and otherwise connected incident activity but can also highlight historical cases that can serve as examples for new or novice analysts.

Finally, the platform’s API and KPI export capabilities enable the extraction of raw incident data, allowing for data mining of valuable reporting information using external analytics tools. This information can then be used to paint a much clearer picture of an enterprise’s security posture and allow for fully-informed strategic decision-making.
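For example, once raw incident records have been exported, a KPI such as mean time to contain per incident category is a short calculation. The record layout below is assumed for illustration and is not IncMan's actual export format.

```python
# The record layout here is assumed for illustration, not IncMan's export format.
from collections import defaultdict
from datetime import datetime

def mean_time_to_contain(records):
    """Compute mean time to contain, in hours, per incident category."""
    durations = defaultdict(list)
    for rec in records:
        opened = datetime.fromisoformat(rec["opened_at"])
        contained = datetime.fromisoformat(rec["contained_at"])
        durations[rec["category"]].append((contained - opened).total_seconds() / 3600)
    return {cat: sum(vals) / len(vals) for cat, vals in durations.items()}

records = [
    {"category": "phishing", "opened_at": "2017-08-01T09:00:00", "contained_at": "2017-08-01T11:30:00"},
    {"category": "phishing", "opened_at": "2017-08-02T14:00:00", "contained_at": "2017-08-02T15:00:00"},
]
print(mean_time_to_contain(records))  # -> {'phishing': 1.75}
```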

Collectively, the IncMan features detailed above empower an organization with the means to support consistency in incident categorization, response, and reporting. For more information, please visit us at https://www.dflabs.com

Slaying the Hydra – Incident Response and Advanced Targeted Attacks

In incident response, protecting against a targeted attack is like slaying the hydra. For those not familiar with it, the hydra is a multi-headed serpent from Greek mythology that grows two new heads for every head you chop off. A determined attacker will try again and again until they succeed, targeting different attack vectors and using a variety of tactics, techniques and procedures.

The Snowden and Shadow Brokers leaks really drove this home, giving partial insight into the toolkit of nation-state actors. What really stuck out to me was the sheer variety of utilities, frameworks and techniques used to infiltrate and gain persistence in a target. Without the leaks, would it have been possible to reliably determine that all of those hacking tools belonged to a single entity? Would a large organization with thousands of alerts and hundreds of incidents every day be able to identify that these different attacks belonged to a single, concerted effort to breach their defenses, or would they conclude that these were all separate, unrelated attempts?

Our colleagues in the threat intelligence and forensic analysis industries have a much better chance of correlating these tools and their footprint in the wild; they may discover that some of the tools share command and control infrastructure, for example. A few did have at least an outline of the threat actor, but judging by the spate of advisories and reports released after the leaks, not many appear to have achieved this to any great degree. The majority were only able to piece the puzzle together once equipped with a concise list of Indicators of Compromise (IoCs) and TTPs to begin hunting with.

“How does this affect me? We are not important enough to attract the attention of a nation state actor”

Some readers may now be thinking, “How does this affect me? We are not important enough to attract the attention of a nation state actor”. I would urge caution in placing too much faith in that belief.

On the one hand, for businesses in some countries the risk of economic espionage by nation-state hacking has decreased. As I wrote on SecurityWeek in July, China has signed agreements with the USA, Canada, Australia, Germany and the UK limiting hacking for the purpose of stealing trade secrets and economic espionage. However, this does not affect hacking for national security purposes, and it will have little impact on privately conducted hacking. These are also bilateral agreements, and none exist with other nations, for example Russia or North Korea. For militarily and economically weaker nation states, offensive cyber activity is a cheap, asymmetric way of gaining a competitive or strategic advantage. As we have seen, offensive cyber activity can target civilian entities for political rather than economic reasons, and hackers are increasingly targeting the weakest link in the supply chain. This means that the probability of being targeted today depends more on your customer, partner and supply chain network than on what your organization does in detail. Security through obscurity has never been a true replacement for actual security, but it has lost its effectiveness as targeted attacks have moved beyond the most prominent and obvious victims. It has become much easier to suffer collateral damage.

Cyber criminals are becoming more organized and professional

On the other hand, cyber criminals are becoming more organized and professional, with individual threat actors selling their services to a wide customer base. A single small group of hackers like LulzSec may have a limited toolbox and selection of TTPs, but professional cybercrime groups have access to numerous hackers, supporting services and purpose-built tooling. If they target an organization directly, persistently rather than opportunistically, it will be just as difficult to discern that a single, concerted attack by one determined threat actor is taking place.

What this means in practice for any organization that may become the target of a sophisticated threat actor is that you have to be on constant alert. Identifying, responding to and containing a threat is not a process to be stepped through with a final resolution step; cyber security incident response is an ongoing, continuous and cyclical process. Advanced and persistent attacks unfold in stages and waves and, like a war, consist of a series of skirmishes and battles that continue until one side loses the will to carry on the conflict or succeeds in its objectives. As with trying to slay the hydra, each incident that you resolve means that the attacker will change their approach, and the next attempt may be more difficult to spot. Two new heads have grown instead of one.

To tackle this requires that we cultivate a perpetual state of alertness in our SOC and CSIRT

To tackle this requires that we cultivate a perpetual state of alertness in our SOC and CSIRT, but we must do so without creating a perpetual state of alarm. The former means that your team of analysts is always aware and alert, looking at each incident as potentially just one hostile act of many that together could constitute a concerted effort to exfiltrate your most valuable data, disrupt your operational capacity, or abuse your organization to do this to your partners or customers. In the latter case, your analysts will suffer from alert fatigue, a lack of true visibility of threats, and a lack of energy and time to see the bigger picture. The hydra will have too many heads to defeat.

In the Greek legend of Heracles, the titular hero eventually defeats the Hydra by cauterizing each decapitated stump with fire to prevent any new heads from forming. Treating an incident in isolation is the Security Incident Response equivalent of chopping off the head of the hydra without burning the stump. Applied to our problem, burning the stump means that we have to conduct the response to each incident thoroughly and effectively, and continue the process well beyond containment.

We must invest more time in hunting and investigating, and we have to correlate and analyze the relationships between disparate incidents. We must use threat intelligence more strategically, to derive situational awareness, and not just tactically as a machine-readable list of IoCs. This also requires gathering sufficient forensic evidence and context about an incident and the related assets and entities during the incident response process, so that we can conduct post-event analysis and continuous threat assessment after containment and mitigation have been carried out. This way we can better anticipate the level of threat we are exposed to and make more informed decisions about where to focus our resources, add mitigating controls and improve our defenses. In incident response, “burning the stump” means making it more difficult for threat actors to succeed in the future by presenting them with a hardened attack surface, reducing their dwell time in our infrastructure, and reducing the time we need to discover and contain them. To do this we need to learn from every incident we manage.


A Weekend in Incident Response #35: The Most Common Cyber Security Threats Today

Companies across different industries around the globe, along with government institutions, cite cyber attacks as one of the biggest security threats to their existence. As a matter of fact, in a recent Forbes survey of over 700 companies from 79 countries, 88 percent of respondents said that they are “extremely concerned” or “concerned” by the risk of getting attacked by hackers.

This is a clear indication that organizations have to ramp up their efforts to enhance their cyber resilience. To do so successfully and in the most effective manner, they need a clear understanding of where the biggest cyber threats come from today, so that they can shape their cyber defenses accordingly. We take a look at the most common cyber security threats today, ranging from internal threats to cyber criminals looking for financial gain and nation states.

Internal Threats

When talking about cyber security, some of the first things that usually come to mind are freelance hackers and state-sponsored attacks between hostile nations. But many cyber security incidents actually come from within organizations, or to be more specific, from their own employees.

Pretty much all experts agree that employees are some of the weakest links in the cyber defense of every organization, in part due to low cyber security awareness, and sometimes due to criminal intent.

Employees often put their companies at risk of getting hacked without meaning to, by opening phishing emails or sharing confidential files through insecure channels, which is why organizations should make sure their staff knows the basics of cyber security and how to avoid the common cyber scams and protect data.

Connected Devices

With so many devices connected to the Internet nowadays, including video cameras, smartphones, tablets, sensors, POS terminals, medical devices, printers and scanners, organizations are at an increased risk of falling victim to a data breach. The Internet of Things is a real and ever-increasing cyber threat to businesses and institutions, increasing their vulnerability to cyber attacks by adding more endpoints that hackers can use to gain access to networks, and by making it easier for hackers to spread malicious software throughout networks at a faster rate.

The Internet of Things is also one of the factors that make DDoS attacks easier to conduct, and these types of attacks can have a significant and long-lasting impact on organizations, both in terms of financial losses and reputational damage.

Nation-State Attacks

Private entities and government institutions that are part of the critical infrastructure in their countries are under a constant threat of different types of attacks by hostile nations. As the number of channels and methods that stand at the disposal of hackers aiming to gain access to computer networks grows, organizations in the public and private sector are facing a growing risk of cyber attacks sponsored by nation-states that might have an interest in damaging the critical infrastructure of other countries, hurting their economies, obtaining top-secret information, or getting the upper hand in diplomatic disputes.

Most commonly, nation-state-sponsored cyber attacks use malware, such as ransomware and spyware, to access computer networks of organizations, as a means of gaining control over certain aspects of the critical infrastructure of another country.

No matter which types of attacks are common today, the number and sophistication of cyber threats to organizations are certain to grow in the future, which is why organizations have to constantly update and adjust their cyber defenses accordingly.