DFLabs 3rd Party Integrations vs the Market

A consistent piece of feedback we receive from our users is that their security technology stack is growing rapidly to keep pace with the evolving threat landscape. The days when it was sufficient to deploy a firewall, an intrusion prevention system, antivirus and an identity and access management system are long gone. Enterprises are spoilt for choice when it comes to selecting from a wide variety of security technologies: User and Entity Behavior Analytics (UEBA), Network Traffic Analysis (NTA), Endpoint Detection and Response (EDR) and Breach and Attack Simulation (BAS) are just a few of the emerging technologies available to security teams, and this list does not even include the mobile, cloud and IoT offerings required to secure an expanding attack surface. This can seem daunting to many organizations, not just because of the budgetary impact, but also because every one of these technologies requires expertise and knowledge to operate effectively.

As a vendor offering a Security Orchestration, Automation and Response platform designed to integrate with and orchestrate these different solutions, we often have to make difficult choices about what we integrate with and how deeply we integrate. Our focus is on market-leading security technologies, on technologies we identify as emerging but of growing importance and effectiveness, and of course on what our customers have deployed and ask us to integrate.

There is a trend in our market to exaggerate the number of 3rd party integrations. Marketing collateral often cites hundreds of different 3rd party tools, yet rarely differentiates between the depth of the integrations, whether they are truly bidirectional, and whether they are certified by the 3rd party. As an example, any solution that supports Syslog can claim to support hundreds of 3rd party technologies. It is an open standard, and many solutions can forward syslog messages to a syslog collector. But that does not necessarily mean that the solution also has out-of-the-box parsers to normalize the messages, or that there are automation actions, playbooks or report templates available that can parse and use their content.
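To illustrate the gap: receiving syslog is trivial, but turning a raw message into normalized fields that playbooks and reports can use requires a product-specific parser. Below is a minimal sketch in Python; the regular expression and field names are illustrative assumptions, not any vendor’s actual parser.

    import re

    # A raw RFC 3164-style syslog line, as a collector might receive it.
    RAW = "<134>Oct 11 22:14:15 fw01 asa: %ASA-6-302013: Built outbound TCP connection"

    # Illustrative pattern: priority, timestamp, host, tag, free-form message.
    SYSLOG_RE = re.compile(
        r"<(?P<pri>\d{1,3})>"
        r"(?P<timestamp>\w{3} [ \d]\d \d{2}:\d{2}:\d{2}) "
        r"(?P<host>\S+) "
        r"(?P<tag>[^:]+): "
        r"(?P<message>.*)"
    )

    def normalize(line):
        """Parse a raw syslog line into normalized fields, or flag it as unparsed."""
        match = SYSLOG_RE.match(line)
        if not match:
            # Without a parser the event stays an opaque string: nothing for
            # automation actions, playbooks or report templates to work with.
            return {"raw": line, "parsed": False}
        fields = match.groupdict()
        # Split the priority value into facility and severity, per RFC 3164.
        fields["facility"], fields["severity"] = divmod(int(fields.pop("pri")), 8)
        fields["parsed"] = True
        return fields

    print(normalize(RAW))

Ingesting the message is the easy half; the value lies in the normalized facility, severity, host and message fields that downstream automation can actually act on.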

Reducing this to a purely quantitative marketing message also entirely misses the point. Organizations only really care about the technologies they have deployed or are planning to acquire; the raw quantity is misleading. More importantly, users are not stupid. They will see through this charade no later than during a proof of concept, and at that point any vendor following this approach has some uncomfortable explaining to do. No relationship, personal or business, gets off to a good start based on fudging the truth.

I have rarely seen an RFP based purely on a quantitative measure of supported 3rd party integrations, so it is baffling that marketers believe this number carries any weight.

At DFLabs we have decided to take a different approach. We want to state clearly which of our integrations are bidirectional as opposed to based only on data ingestion, and which integrations are certified or compatibility tested by our integration partners.

While at first glance this appears to put us at a disadvantage, we trust in the intelligence of our customers. We hope that it will help them make better-informed decisions and that they will give us credit for being honest and realistic.

GDPR & Breach Notification – Finally We Will Get Some European Breach Data

The EU GDPR will be enforced from May 25th next year. GDPR imposes a wide variety of requirements on how data processors must manage customer and 3rd party data. Although it is not primarily focused on cybersecurity, it does contain some vague requirements around security monitoring. These include that data processors must establish a breach notification procedure, including incident identification systems, and must be able to demonstrate that they have an incident response plan in place.

GDPR and Data Breach Notification

Further, there is a requirement to notify the supervisory authority within 72 hours of becoming aware of a data breach, or face a stiff financial penalty. This last requirement is of special interest beyond its impact on data processors, because it means that for the first time we will begin to have reliable data on European breaches.

Historically, European companies have had no external requirement to be transparent about being affected by a breach. As a consequence, we have had neither good data nor real awareness of how well or badly European organizations are doing when it comes to preventing and responding to security breaches.

I am sure that if, like myself, you have worked in forensics and incident response in Europe over the years, you are aware of far more breaches than are publicly disclosed. Information only becomes available when a breach is disclosed through the press or law enforcement, or when the impact is so great that it cannot be ignored. We also have some anonymized reports from vendors and MSSPs, but these are really no more than samples. While not without benefit, they do not provide a reliable indicator, as the samples are not necessarily statistically representative. This gives a false sense of how European organizations are faring compared to other regions and presents a skewed image of European security in general.

The true state of European security is unknown and has been difficult to quantify. I have seen German articles, for example, claiming that German security is better than that of the rest of the world because there are fewer known breaches. The absence of evidence is of course not evidence of absence: something that has not been measured cannot be said to be good or bad. More importantly, what is not measured cannot be improved.

It will be interesting to see whether GDPR will force European organizations to place more focus on Incident Detection and Response, and give us insight into the true state of European security.

When is Security Automation and Orchestration a Must-Have Technology? – Addressing Gartner’s SOAR Question

Last week, Anton Chuvakin from Gartner announced that he and Augusto Barros are planning to conduct research in Q4 2017 on the topic of Security Orchestration, Automation and Response (SOAR), or Security Automation and Orchestration (SAO), depending on which analyst firm’s market designation you follow. At DFLabs we are very excited that Gartner is finally showing our market space some love and will be helping end users to better assess and differentiate SAO offerings.

Anton provided many questions that he wanted SAO vendors to prepare for. The questions immediately piqued our interest, with one in particular standing out to us:

  1.  When is SOAR a MUST have technology? What has to be true about the organization to truly require SOAR? Why your best customer acquired the tools?

Anton also said that he had one main problem with Security Automation and Orchestration. In his own words: “For now, my main problem with SOAR (however you call those security orchestration and automation tools… if you say SOAPA or SAO we won’t hate you much) is that I have never (NEVER!) met anybody who thought ‘my SOAR is a MUST HAVE.’”

The question is not entirely unwarranted. During my own time at Gartner covering the SOAR space, I spoke to many clients who were seeking an SAO solution without knowing it. Typical comments were, “I have too many alerts and false positives to be able to deal with them all”, or “We are struggling to hire enough skilled people to respond to all of the incidents we have to manage”. Another common comment was, “I am struggling to report operational performance to my executives”. Often, these comments were followed by the question, “Do you know of any technology that can help?”.

Typically, these organizations had a mature security monitoring program, usually built around a SIEM. They often had critical drivers, such as regulatory requirements, or held sensitive customer data. We hear the same buying drivers from our own customer base.

To sum up the most common drivers for someone asking about Security Automation and Orchestration:

  1.  A high volume of alerts and incidents and the challenge of managing them
  2.  A large portfolio of diverse 3rd party security detection products resulting in a large volume of alerts
  3.  Regulatory mandates for incident response and breach notification
  4.  An overstretched security operations team
  5.  Reporting risk and the operational performance of the CSIRT and SOC to an executive audience

Interestingly, when there is no external driver such as regulatory compliance, the decision to deploy a Security Automation and Orchestration solution is often determined by maturity. Most organizations do not realize that they will be unable to cope with the volume of alerts and the resulting alert fatigue until they have deployed a SIEM and a full advanced threat detection architecture.

A common misconception is that the SIEM can reduce the number of incoming alerts by applying correlation rules. This is not entirely untrue, but correlation rules only reduce a small percentage. They are essentially signature-based: you need to know in advance what you want to correlate, and writing correlation rules to cover each and every incoming alert is not a trivial task. Even with correlation rules, additional work is required to qualify an incident. Gathering additional IoCs, incident observables and context is still a very manual process. Lastly, detection is only one part of the entire incident response process; notifying stakeholders, gathering forensic evidence and containing the threat also have to be done manually. These are the areas where SAO solutions provide the greatest ROI, acting as a force multiplier.
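To make the force-multiplier point concrete, here is a sketch of an automated enrichment step, assuming hypothetical threat intelligence and asset lookup services; the function names and returned fields are invented for illustration and stand in for whatever integrations an SAO platform would actually orchestrate.

    # Hypothetical stand-ins for 3rd party integrations.
    def threat_intel_lookup(ioc):
        return {"reputation": "malicious", "campaigns": ["example-campaign"]}

    def asset_lookup(ip):
        return {"owner": "finance", "criticality": "high"}

    def enrich(incident):
        """Perform the qualification work an analyst would otherwise do by
        hand: gather reputation and asset context for every observable."""
        incident["context"] = {}
        for ioc in incident["observables"]:
            incident["context"][ioc] = {
                "intel": threat_intel_lookup(ioc),
                "asset": asset_lookup(ioc),
            }
        # Later playbook steps (notification, containment) can branch on the
        # enriched context instead of waiting on manual triage.
        return incident

    print(enrich({"alert_id": "SIEM-4711", "observables": ["203.0.113.10"]}))

Each lookup that runs automatically is one fewer console an analyst has to pivot into before an incident can be qualified.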

Slaying the Hydra – Incident Response and Advanced Targeted Attacks

In incident response, protecting against a targeted attack is like slaying the hydra. For those not familiar with it, the hydra is a multi-headed serpent from Greek mythology that grows two new heads for every head you chop off. A determined attacker will try again and again until they succeed, targeting different attack vectors and using a variety of tactics, techniques and procedures (TTPs).

The Snowden and Shadow Brokers leaks really drove this home, giving partial insight into the toolkit of nation state actors. What really stuck out to me was the sheer variety of utilities, frameworks and techniques used to infiltrate a target and gain persistence. Without the leaks, would it have been possible to reliably determine that all of those hacking tools belonged to a single entity? Would a large organization facing thousands of alerts and hundreds of incidents every day be able to identify that these different attacks belonged to a single, concerted effort to breach its defenses, or would it conclude that these were all separate, unrelated attempts?

Our colleagues in the threat intelligence and forensic analysis industries have a much better chance of correlating these tools and their footprint in the wild; they may discover, for example, that some of the tools share a command and control infrastructure. A few did have at least an outline of the threat actor, but judging by the spate of advisories and reports released after the leaks, not many actually achieved this to any great degree. The majority were only able to piece the puzzle together once equipped with a concise list of Indicators of Compromise (IoCs) and TTPs to begin hunting with.

“How does this affect me? We are not important enough to attract the attention of a nation state actor”

Some readers may now be thinking, “How does this affect me? We are not important enough to attract the attention of a nation state actor”. I would urge caution in placing too much faith in that belief.

On the one hand, for businesses in some countries the risk of economic espionage by nation state hacking has decreased. As I wrote on Securityweek in July, China has signed agreements with the USA, Canada, Australia, Germany and the UK limiting hacking for the purpose of stealing trade secrets and conducting economic espionage. However, these agreements do not cover hacking for national security purposes, and they will have little impact on privately conducted hacking. They are also bilateral agreements, and no equivalent exists with other nations, for example Russia or North Korea. For militarily and economically weaker nation states, offensive cyber operations remain a cheap, asymmetric means of gaining a competitive or strategic advantage. As we have seen, offensive cyber activity can target civilian entities for political rather than economic reasons, and attackers are increasingly targeting the weakest link in the supply chain. This means that the probability of being targeted today depends more on your customer, partner and supply chain network than on what your organization itself does. Security through obscurity has never been a true replacement for actual security, but it has lost even its limited effectiveness now that targeted attacks have moved beyond the most prominent and obvious victims. It has become much easier to suffer collateral damage.

Cyber criminals are becoming more organized and professional

On the other hand, cyber criminals are becoming more organized and professional, with individual threat actors selling their services to a wide customer base. A single small group of hackers like LulzSec may have a limited toolbox and selection of TTPs, but professional cybercrime groups have access to numerous hackers, supporting services and purpose-built tools. If such a group targets an organization directly, persistently rather than opportunistically, it will be just as difficult to discern that a single concerted attack by one determined threat actor is taking place.

What this means in practice for any organization that may become the target of a sophisticated threat actor is that you have to be on constant alert. Identifying, responding to and containing a threat is not a process to be stepped through to a final resolution; cyber security incident response is an ongoing, continuous and cyclical process. Advanced and persistent attacks unfold in stages and waves and, like a war, consist of a series of skirmishes and battles that continue until one side loses the will to carry on the conflict or succeeds in its objectives. As with the hydra, each incident you resolve means that the attacker will change their approach, and the next attempt may be more difficult to spot. Two new heads have grown in place of one.

To tackle this requires that we cultivate a perpetual state of alertness in our SOC and CSIRT

To tackle this requires that we cultivate a perpetual state of alertness in our SOC and CSIRT, but we must do this without creating a perpetual state of alarm. The former means that your team of analysts is always aware and alert, treating each individual incident as potentially just one hostile act of many that together could constitute a concerted effort to exfiltrate your most valuable data, disrupt your operational capacity, or abuse your organization to do the same to your partners or customers. In the latter case, your analysts will suffer from alert fatigue, a lack of true visibility into threats, and a lack of the energy and time needed to see the bigger picture. The hydra will have too many heads to defeat.

In the Greek legend of Heracles, the titular hero eventually defeats the hydra by cauterizing each decapitated stump with fire to prevent new heads from forming. Treating an incident in isolation is the security incident response equivalent of chopping off one of the hydra’s heads without burning the stump. Applied to our problem, burning the stump means conducting the response to each incident thoroughly and effectively, and continuing the process well beyond containment.

We must invest more time in hunting and investigating, and we have to correlate and analyze the relationships between disparate incidents. We must use threat intelligence more strategically to derive situational awareness, not just tactically as a machine-readable list of IoCs. This requires gathering sufficient forensic evidence and context about an incident and its related assets and entities during the incident response process, so that we can conduct post-incident analysis and continuous threat assessment after containment and mitigation have been carried out. This way we can better anticipate the level of threat we are exposed to and make more informed decisions about where to focus our resources, add mitigating controls and improve our defenses. In incident response, “burning the stump” means making it more difficult for threat actors to succeed in the future: presenting them with a hardened attack surface, reducing their dwell time in our infrastructure, and reducing the time we need to discover and contain them. To do this we need to learn from every incident we manage.
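As a small illustration of what correlating disparate incidents can look like in practice, the sketch below groups incidents that share observables, such as command and control infrastructure. The incident data is invented for the example.

    from itertools import combinations

    # Invented incidents: id -> set of observed IoCs (C2 addresses, hashes...).
    incidents = {
        "INC-101": {"198.51.100.7", "evil.example.com"},
        "INC-205": {"198.51.100.7", "dropper-hash-abc123"},
        "INC-311": {"203.0.113.50"},
    }

    # Flag incident pairs sharing at least one IoC: candidates for a single
    # concerted campaign rather than separate, unrelated attempts.
    for (a, iocs_a), (b, iocs_b) in combinations(incidents.items(), 2):
        shared = iocs_a & iocs_b
        if shared:
            print(a, "and", b, "share", shared, "- review as one campaign")

Trivial as it is, even this level of cross-incident analysis is impossible if observables and context are not captured consistently during every response.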