The recent SANS 2018 Security Operations Center (SOC) Survey, designed to identify the areas where SOCs need improvement to reach consistent levels of success, revealed several significant deficiencies. These challenges can be overcome with proven best practices. This blog post focuses on the top four identified SOC deficiencies, the core causes behind them, and the actions that can be taken to address them.
Lack of automation/orchestration, integrated toolsets and processes/playbooks
Most SOCs fall behind with automation and orchestration mainly because they aren’t aware of the processes that should be automated. This issue can be fixed by performing employee interviews and conducting risk and security assessments.
Employees are the first line of defense in an organization, and interviewing them to find out what tasks they are responsible for can identify repeatable processes. These processes, such as evidence gathering during an incident (IP/URL reputation lookups, supporting information, etc.), are time-consuming but can be easily automated with SOAR technology. By automating time-consuming processes, employees can devote their time to more urgent matters, which benefits the overall organization.
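As a concrete illustration, an automated evidence-gathering playbook of this kind might look like the following sketch. The stubbed blocklist source and the `enrich`/`EnrichmentResult` names are assumptions for illustration, not any specific SOAR product's API; in a real playbook each source would be an API call to a reputation service.

```python
from dataclasses import dataclass, field

# Hypothetical reputation "source" -- a local stub standing in for a real
# reputation-service API call.
def local_blocklist(indicator):
    blocklist = {"198.51.100.7", "203.0.113.99"}
    return "malicious" if indicator in blocklist else "unknown"

@dataclass
class EnrichmentResult:
    indicator: str
    verdicts: dict = field(default_factory=dict)

    @property
    def is_suspect(self):
        # Escalate if any configured source flags the indicator
        return any(v == "malicious" for v in self.verdicts.values())

def enrich(indicator, sources):
    """Run every configured reputation source against one indicator."""
    result = EnrichmentResult(indicator)
    for name, lookup in sources.items():
        result.verdicts[name] = lookup(indicator)
    return result

sources = {"blocklist": local_blocklist}
for ip in ["198.51.100.7", "192.0.2.10"]:
    r = enrich(ip, sources)
    print(r.indicator, "->", "escalate" if r.is_suspect else "benign/unknown")
```

The analyst's manual lookups collapse into one function call per indicator, which is exactly the kind of repeatable task the interviews are meant to surface.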
Performing risk assessments and other security-related tasks will naturally lead to the strengthening of a security program by identifying assets (asset management), identifying vulnerabilities (vulnerability management), providing metrics to monitor and improve (security metrics program), and highlighting areas to be included in a security monitoring program. Identifying these areas of an organization’s security landscape means additional repeatable processes will be exposed, and this not only provides automation opportunities but also aids in overcoming the other deficiencies today’s SOCs are struggling with.
Additionally, the lack of integration between security tools can be attributed to an increasingly saturated security vendor space: organizations are forced to layer their security defenses to protect against multi-threaded attacks, which leaves security teams with only vague knowledge of their product lines and what these products can do in concert with each other. There is no easy fix here, but alternatives include performing Proof of Concept (POC) engagements and encouraging security vendors to “lean in” and gain a better understanding of the organization’s environment. By doing so, organizations can test drive a product, identify possible gaps, and correct them before deploying it to the environment.
Finally, SOCs that fall behind in terms of processes and playbooks typically have a low-maturity security program. In these situations, working with a managed security service provider or a managed detection and response service is a good alternative.
Asset discovery and inventory tool satisfaction was lowest of all SOC technologies
The main reason for this finding is simple: asset inventory and management is hard. Even with an asset management or inventory system in place, the technology staff is left doing the heavy lifting, and the upfront investment of time and energy is what usually causes organizations to become dissatisfied. In a world of instant gratification, we expect that spending a certain amount of money on a product should accelerate us toward our end goal. Unfortunately, reality sets in: we are still faced with dynamic business landscapes and a rapidly evolving technology curve, which forces us to roll up our sleeves and get our hands dirty.
Any asset management program requires planning and a full understanding of the environment. Without these crucial steps, any tool that is purchased will fail to meet your requirements. As mentioned earlier, perform risk and security assessments against your environment. Many security assessments, particularly vulnerability assessments, include a discovery phase that produces a list of assets and their vulnerabilities, which an organization can use as a jumping-off point. And as always, keep in mind there is no single solution good enough for everyone. There will be some pain and heartache when standing up an asset management solution, but when done correctly it will be worth it in the long run.
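A minimal sketch of using that discovery output as a jumping-off point might look like the following. The record layout (`ip`, `hostname`, `vulns`) is an illustrative assumption, not any particular scanner's output format.

```python
# Seed and maintain an asset inventory from a vulnerability scan's
# discovery phase; flag assets we have never seen before.
def update_inventory(inventory, discovered):
    """Merge discovery results into the inventory; return newly seen IPs."""
    new_assets = []
    for asset in discovered:
        if asset["ip"] not in inventory:
            new_assets.append(asset["ip"])
        inventory[asset["ip"]] = asset  # latest scan wins
    return new_assets

inventory = {"10.0.0.5": {"ip": "10.0.0.5", "hostname": "dc01", "vulns": []}}
scan_results = [
    {"ip": "10.0.0.5", "hostname": "dc01", "vulns": ["CVE-2018-0101"]},
    {"ip": "10.0.0.9", "hostname": "print01", "vulns": []},
]
print(update_inventory(inventory, scan_results))  # ['10.0.0.9']
```

Even this trivial loop surfaces the two pieces of information an early program needs: which assets exist, and which ones appeared since the last scan.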
Despite the use of SIEM and big data tools, most event correlation is still manual
This seems counter-intuitive, but there’s a good explanation. Standing up a SIEM is not as simple as turning it on and pointing log sources at it; organizations should have a grasp of their log sources and the overall visibility they provide into the environment.
To do this correctly, an organization should perform a network audit. This will highlight where network taps should be located, which devices consistently speak to each other, and whether there are any gaps or obstacles that must be resolved. Obstacles such as web proxies masking a true source address, or short DHCP leases, may prevent an investigator from locating a potential victim and limit an organization’s SIEM from properly correlating events. Understanding where these gaps lie, and the limitations a chosen SIEM product may have, can help investigation teams better understand where manual correlation may still be necessary.
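To make the DHCP obstacle concrete, here is a minimal sketch of lease-based attribution: given an event time and an IP, find which host held the lease. The lease records and the `host_at` helper are hypothetical, but the failure mode they demonstrate is the one described above.

```python
from datetime import datetime
from typing import Optional

# Illustrative DHCP lease records: (ip, hostname, lease_start, lease_end).
LEASES = [
    ("10.1.1.20", "laptop-ann", datetime(2018, 9, 1, 8, 0), datetime(2018, 9, 1, 12, 0)),
    ("10.1.1.20", "laptop-bob", datetime(2018, 9, 1, 12, 5), datetime(2018, 9, 1, 18, 0)),
]

def host_at(ip: str, when: datetime) -> Optional[str]:
    """Return the host that held `ip` at time `when`, if the leases say."""
    for lease_ip, host, start, end in LEASES:
        if lease_ip == ip and start <= when <= end:
            return host
    return None  # gap in lease data: automatic correlation fails here

print(host_at("10.1.1.20", datetime(2018, 9, 1, 9, 30)))  # laptop-ann
print(host_at("10.1.1.20", datetime(2018, 9, 1, 12, 2)))  # None (between leases)
```

An event that falls in the five-minute gap between leases cannot be attributed automatically, so an analyst must correlate it by hand; short leases multiply these gaps.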
Effectiveness of SOC/NOC integration is low
This deficiency is a cultural problem: SOC teams have one agenda (detection and protection), while NOC teams have another (maintaining uptime and availability). These are usually at odds with each other; take, for example, the age-old conflict over least privilege. Network teams want the keys to the castle and the ability to move freely through the environment, while SOC teams are focused on locking down the environment to better identify anomalies that may indicate malicious activity.
To add to this misaligned agenda, both groups are usually under-resourced and overworked due to the lack of qualified candidates and the mounting responsibilities these teams face when maintaining and securing a network. To bridge the gap, organizations should institute processes and procedures that outline rules of engagement between the teams. With rules of engagement in place, both departments know what their responsibilities are, and the established processes and procedures leave little doubt as to how the partnership should function.
These SOC deficiencies can be overcome in most organizations with timely planning and the right processes in place. For those that lack appropriate resources or a mature security program, a good option is to use a managed security service provider or a managed detection and response service.
What Organizations Need to Know and How to Utilize Vital Information Sharing to Reduce an Environment’s ATT&CK Surface
What is MITRE?
The MITRE Corporation is a non-profit organization which operates multiple federally funded research and development centers across the United States. Their mission is to help overcome problems which challenge the nation’s security and its overall stability. For the last 60 years, MITRE has helped to provide solutions to the complex problems faced by key government sectors such as the Department of Homeland Security and Cyber Counterintelligence. Their work in the cyber security sector has provided countless innovative solutions which stretch far beyond its government application.
What is the ATT&CK Matrix?
One of these innovative solutions is MITRE’s ATT&CK Matrix. The ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge) Matrix is a carefully compiled knowledge base describing how adversaries penetrate networks, move laterally across them by escalating privileges, and often evade an organization’s defenses for extended periods of time.
The ATT&CK Matrix looks at these actions from the perspective of an adversary: the goals they may look to achieve and the methods they may use to achieve them. These methods are broken down into tactics, techniques, and procedures (TTPs) observed during MITRE’s research as well as during penetration testing and red team engagements. The information gathered during these engagements provides a model network defenders can use to better categorize and understand post-exploitation activities.
The organization of the TTPs found within the ATT&CK Matrix may be familiar to most as they coincide with the later stages of the Lockheed Martin Cyber Kill Chain.
Why is the ATT&CK Matrix Important?
Cybersecurity is a “game of inches” and every inch covered has proved to be no small feat for network defenders. As adversaries evolve their tactics and techniques, the security community works tirelessly to evolve their detection and remediation methods to bring their organizations and the community one step closer to closing the gap faced when battling these elusive actors.
One of the detection and remediation methods that has gained a lot of momentum is Cyber Threat Intelligence (CTI). Since its inception, some of the mysticism surrounding its definition and applicable use has begun to dissipate. However, as with any new school of thought, there is still a lot of knowledge to be gained and work to be done.
The traditional approach to CTI has proven to be cumbersome. Threat intelligence data is often delivered through lengthy reporting efforts, which can leave analysts scrambling to extract meaningful information and then apply it in a manner that proves to be an effective means of defense.
Another obstacle organizations face is the overwhelming number of indicators these reports produce. These indicators, more often than not, provide little context and must be vetted before they can be consumed. This can be a daunting process which, if not done correctly, can contaminate an organization’s intelligence data and drive false positives even higher than they already are. To make matters worse, even if the above obstacles are overcome, these indicators are constantly evolving, leaving stale data in their wake that must be continuously reviewed and re-prioritized.
Now that we have stated some of the obvious issues, what can be done about it? That is where the ATT&CK Matrix comes in. ATT&CK provides structure to this chaos by allowing analysts and network defenders to gather greater context around adversary groups, how they compare to other groups, and what TTPs they are using. This invaluable information will help organizations begin to gain value from their threat intelligence while remaining sane in the process.
How to Utilize the ATT&CK Matrix
While researching how best to utilize ATT&CK, I came across a beautifully written article by Katie Nickels, a lead cybersecurity engineer with the MITRE Corporation. In her two-part article she describes the best ways to utilize the ATT&CK Matrix, how it came about, and who contributes to its success. She also references an ideology from David Bianco called the Pyramid of Pain. The original article was published in 2013 but still holds true today. In it, David outlines the value and priority of the different threat indicators organizations will encounter, and describes how these indicators can be used to disrupt or completely dismantle an adversary’s TTPs, finally giving security teams and network defenders the upper hand.
If you are one of many network defenders who are struggling to make your threat intelligence data work for you, or if you are not familiar with ATT&CK and the incredible work they are providing to the community, I highly recommend spending some time reading Katie’s article and exploring how ATT&CK can help close the gap and reduce your ATT&CK surface.
What is Threat Intelligence?
Threat Intelligence has morphed from a catchy marketing buzzword to a highly sought-after tool, which when used correctly, can bring immense value to an organization. However, because it is in high demand and organizations are researching and adopting it in some form or another, the market has become flooded with products and services promising to provide “Threat Intelligence” to an organization. Unfortunately, in many cases, the “Threat Intelligence” provided is only one piece of a larger puzzle.
When working with Threat Intelligence it is easier to look at it as two separate concepts:
- Threat Data (aka Threat Feeds)
- Threat Context (aka Intelligence)
These concepts combined produce the relevant and actionable “Intelligence” organizations need to better align their security goals with their business’s long-term objectives.
Threat Data consists of raw data feeds containing artifacts such as malicious IPs or URLs, which generally lack context regarding the motives or behavior behind them. Threat data alone cannot provide the intelligence necessary to make informed decisions about the security of our environments, but when paired with Threat Context it gives us a clearer picture of the risk to our organization.
Threat Context is more elusive, and it is usually where organizations fall short when implementing a Threat Intelligence program. To apply “context,” an organization must have a clear goal for what it is trying to achieve by introducing a piece of threat data into its security program. Without a clear vision, threat intelligence can become an expensive drain on resources with little to no real value.
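The pairing of raw data and context can be sketched as a simple prioritization step. The record fields and scoring weights below are illustrative assumptions, not a standard scheme; the point is that an indicator with context, and especially context relevant to your business, rises to the top of the queue.

```python
# Raw feed entries (threat data) and a separate store of context records.
raw_feed = [
    {"indicator": "203.0.113.50", "type": "ip"},
    {"indicator": "198.51.100.7", "type": "ip"},
]
context = {
    "203.0.113.50": {"campaign": "credential phishing", "targets_our_sector": True},
}

def prioritize(feed, context):
    """Score each indicator: raw data alone is low confidence; context adds weight."""
    scored = []
    for item in feed:
        ctx = context.get(item["indicator"])
        score = 1  # raw threat data alone: low confidence
        if ctx:
            score += 2  # we know the "why" behind the indicator
            if ctx.get("targets_our_sector"):
                score += 2  # relevant to our business: act on it first
        scored.append({**item, "context": ctx, "score": score})
    return sorted(scored, key=lambda i: i["score"], reverse=True)

for item in prioritize(raw_feed, context):
    print(item["indicator"], item["score"])
```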
Threat Intelligence Challenges
As more organizations begin to adopt threat intelligence practices into their security programs the need for a more structured implementation path has become greater. Threat intelligence implementation is a marathon process which needs to be carefully planned and executed to ensure it is agile and built on a strong foundation.
Understanding some common challenges organizations have faced while building their threat intelligence program can provide valuable information to those organizations looking to adopt threat intelligence into their security monitoring program.
Does Not Align with Business Goals
One of the biggest mistakes made when implementing a threat intelligence program is the failure to tie its use to a risk to the business. When evaluating threat intelligence feeds, security teams will want to identify the business problem the feeds will help solve and examine how they will utilize these data sources in conjunction with their internal threat intelligence feeds.
Performing a risk assessment can help identify the risks an organization may face and what can be done to minimize their impact on the business. This practice arms an organization with valuable information about how best to protect the business and what types of intelligence will make the most impact.
Choosing the Wrong Intelligence Data
Over the past couple of years, threat intelligence data or feeds have become synonymous with a threat intelligence program. This data is a crucial part of an intelligence program, but without context, an organization runs the risk of adding yet another data source without fully recognizing its value. When evaluating threat intelligence data, consider the following:
- What is the focus?
A majority of threat intelligence feeds focus on a single area of interest such as malicious domains, IP addresses, or hash values. Knowing how these feed types will be utilized within your organization will determine their overall value.
- Where is the information gathered from?
There is an endless number of free and paid threat intelligence subscription services available, but not all data sources are created equal. The main types of intelligence data sources to be aware of when evaluating a threat feed include:
- open source
- malware processing
- human intelligence
- internal telemetry
Organizations will want to have a good understanding of where these feeds are derived from and ensure, especially if they are delivered via a paid service, that they can be evaluated against their internal intelligence to recognize their maximum potential.
- How frequently are they updated?
Ensuring threat intelligence feeds are updated and relayed at near real-time is an invaluable feature of any reputable data source. Ingesting stale or incomplete data can cause an organization to focus on the wrong objectives which can lead to data overload and alert fatigue.
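The freshness question lends itself to a simple automated check. The entry layout and the 30-day threshold below are assumptions for illustration; tune the threshold to the feed's actual cadence.

```python
from datetime import datetime, timedelta

# Flag indicators whose last update exceeds a staleness threshold,
# so they can be reviewed or re-prioritized before they pollute alerting.
def stale_indicators(feed, now, max_age=timedelta(days=30)):
    return [e["indicator"] for e in feed if now - e["last_updated"] > max_age]

now = datetime(2018, 10, 1)
feed = [
    {"indicator": "evil.example", "last_updated": datetime(2018, 9, 28)},
    {"indicator": "old.example", "last_updated": datetime(2018, 6, 1)},
]
print(stale_indicators(feed, now))  # ['old.example']
```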
Asking these questions when evaluating a new threat feed will help identify what sources of intelligence may be the best fit for your business need, but the real value will be displayed through its analysis. Performing proper analysis of a threat intelligence feed is what will provide the context necessary for an organization to make operational changes to better secure their environment. Without analysis, these feeds become another potentially costly, unmanageable source of noise.
Failure to Operationalize Intelligence Data
The ability to utilize threat intelligence data in an operational capacity is the ultimate goal of a threat intelligence program. A successful program will present an organization with greater insight into the potential threats their environment faces and provide its security team a way to prioritize their alerting based on the risk it poses to business. Failure to align an organization’s security program with their business objectives can have a direct impact on the intelligence sources they utilize and how they are able to operationalize their intelligence.
Overcoming these challenges while implementing a threat intelligence program can be tricky. It is an ongoing, and at times tedious, process which, if implemented correctly, will adapt as your business grows. If you do find yourself up against any of these challenges, take a step back and make sure that the intelligence source fits a business objective, is sourced appropriately for its use case, and can be used to make operational changes. If you can answer yes to all of these criteria, you are on your way to achieving higher-quality cyber threat intelligence.
SANS recently released its 2018 SOC Survey, and many of its findings were no surprise to anyone who has been responsible for maintaining an organization’s security posture. Many respondents reported a continued breakdown in communication between NOC and SOC operations and a lack of dynamic asset discovery procedures, and event correlation continues to be a manual process even though SOC staff are being worn thin by the mounting responsibilities they must take on.
Why Measuring SOC-cess Matters
Anyone who has been part of a security team knows these issues are an everyday battle, but those “common” issues were not what caught me off guard. The most shocking statistic I gathered from this survey is that only 54% of respondents reported actively using metrics to measure their SOC’s success! I was taken aback by this finding and couldn’t help but wonder whether all the other reported SOC deficiencies could be directly related to this missing link.
I have been in the security industry for close to ten years, most of which I spent as a SOC analyst and SIEM engineer for a large MSSP. It was my responsibility to be an extension of my clients’ security arm, and those clients ranged from large Fortune 500 companies to small family-owned businesses. Each client was unique; what one found important, another thought of as noise. The diversity among these clients taught me early on how important it is to understand each client’s definition of success, so that I could help them not only achieve their security goals but also stay ahead of today’s rapidly expanding threat landscape.
This diversity also taught me another valuable lesson: not all security programs are created equal. Naturally, my larger clients had a more mature security posture; they knew what they wanted and what it would take to get there, and they had the funding to back it up. Unfortunately, some of my smaller clients were not as lucky. They were severely understaffed, their IT department was the security department, they lacked adequate funding to stay ahead of the ever-growing security curve, and in many cases the measurement of success resembled a game of whack-a-mole.
Does this sound familiar? If the answer is yes, you can rest assured that you are not alone. Even the most secure, highly funded organizations have struggled with these obstacles. However, I believe one of the biggest differences between these organizations and the organizations striving to be like them isn’t directly due to the lack of funds, but instead the metrics they are using to show value in what they are trying to accomplish.
Don’t get me wrong: funding is, and always will be, an obstacle that organizations large and small must overcome when building and maintaining a security program. But the larger and more dangerous obstacle is the one we create for ourselves by not measuring and monitoring our security strengths and weaknesses through a strong security metrics program.
A security metrics program will be as different as the organization it aims to define. To truly understand what success looks like for you, there are a few recommended tasks that, when completed, will give you a greater understanding of your environment and a strong foundation for your security metrics program.
How to enhance your security program
- Conduct a risk assessment
A risk assessment is meant to help identify what an organization should be protecting and why. A successful assessment should highlight an organization’s valuable assets and showcase how they may be attacked and what would be at stake if an attack is successful. Armed with the results of this assessment, organizations can not only begin to address their deficiencies but now have a solid set of metrics that they can use to measure their success as they move forward.
- Perform vulnerability assessments
Vulnerability assessments are another vital security tool, designed to detect as many vulnerabilities as possible in an environment and to aid security teams in prioritizing and remediating issues as they are uncovered. All organizations, regardless of maturity, will benefit from these assessments, but organizations with a low to medium security posture may benefit the most. The results of these assessments will help give greater definition to what an organization’s metrics should consist of and what steps are necessary for continued success.
- Adopt a security framework
Even if you are not held to a compliance standard, adopt a security framework anyway. Choosing a framework to model from does not guarantee an organization’s safety, but organizations that adopt a standard tend to have higher security maturity and are more likely to identify, contain, and recover from an incident faster than those that do not follow security best practices. These frameworks, in conjunction with the security assessments mentioned above, were built to give organizations a blueprint for protecting their environment and measuring their successes.
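As a rough illustration of how assessment output can feed a metrics program, vulnerability findings can be ranked by severity weighted by asset criticality. The CVSS values, criticality ratings, and the multiplication itself are assumptions for illustration, not a standard formula.

```python
# Hypothetical findings from a vulnerability assessment, joined with an
# asset-criticality rating from the risk assessment (1 = low, 3 = critical).
findings = [
    {"asset": "dc01", "cve": "CVE-2018-0101", "cvss": 9.8, "criticality": 3},
    {"asset": "print01", "cve": "CVE-2017-0199", "cvss": 7.8, "criticality": 1},
    {"asset": "web01", "cve": "CVE-2018-7600", "cvss": 9.8, "criticality": 2},
]

def remediation_order(findings):
    """Rank findings so the riskiest combination of flaw and asset comes first."""
    return sorted(findings, key=lambda f: f["cvss"] * f["criticality"], reverse=True)

for f in remediation_order(findings):
    print(f["asset"], f["cve"], round(f["cvss"] * f["criticality"], 1))
```

Tracking how quickly the top of this queue is cleared each month is one example of a concrete, repeatable metric the assessments make possible.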
I sincerely believe in the value of a rich metrics program and have seen firsthand what it can do for an organization. With the level of sophistication in today’s cyber attacks and the environments they target, we can no longer afford to leave our security up to chance. It is my hope that when SANS publishes its SOC Survey for 2019, we will have taken the steps necessary to change this statistic, because I know that as an industry we can do better.
If you want to read more about KPIs and the metrics that we suggest should be set, monitored and measured for a more efficient and effective security program, read our white paper titled “Key Performance Indicators (KPIs) for Security Operations and Incident Response”.