Vulnerability Remediation Orchestration and the NIST Vulnerability Management Model

Protect - "Develop and implement the appropriate safeguards to ensure delivery of critical infrastructure services."
Music Box - 5th Floor | BRIEFING
May 08, 2018 05:00 PM - 05:30 PM (America/New_York)

1. To provide a picture of the real-world implications of implementing the NIST Vulnerability Management model within the context of today’s protective technology landscape.

2. To show how orchestration principles can be applied to vulnerabilities to maximize the effectiveness of protective technology within the NIST Vulnerability Management model.

3. To highlight the importance of vulnerability data compression and a vulnerability data rubric in achieving the NIST Vulnerability Management model.

4. To address the operational challenges implied by the data volumes that protective technology generates.

The NIST Vulnerability Management model has many excellent tenets that prescribe the ideal objectives of any vulnerability management program. One of the key tenets describing protective technology is “Vulnerability scans are performed.” Many organizations do scan their environments and attempt to follow the NIST model tenets.

Practical implementation of the NIST model requires a significant effort to realize the stated objectives beyond simply bringing in scanner data. The tenets are simple, but the reality is much more complex and challenging. Contributing factors are very similar to the challenges in incident response, including large data volumes, disparate data silos, and coordination of human and machine resources within standard processes. Key challenges in implementing the NIST model are discussed below.

 

Supporting Content

Challenge 1

The NIST Vulnerability Management model states “Asset vulnerabilities are identified and documented.” Real-world protective technologies generate large data volumes and are prone to inaccuracy and data noise. Scanners are trying to improve their internal result sets, but organizations need the entire picture across multiple scanners for true accuracy. Combining multiple data sources from disparate systems to get a complete and accurate understanding of the vulnerability landscape is difficult. Asset uniqueness is often the hinge factor, because vulnerability correlation is only as good as the underlying location information. Incorrect asset information can in turn degrade vulnerability data.

Solution 1

The process of gathering large data volumes for enrichment and correlation has already been implemented within the incident response sector as part of orchestration. Applying the same orchestration principles to gather the appropriate data automatically at scan ingestion can significantly enhance the accuracy of identifying asset vulnerabilities and serve as a foundation for subsequent workflows, particularly asset uniqueness.

Orchestration requires many integration-type connectors to bridge the data silos and pull all of the data into a single location. Connectors in turn require open APIs (i.e., no walled gardens) and ongoing maintenance in order to be effective.
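As an illustration of what such a connector layer might look like, the minimal sketch below normalizes each silo's output into a single finding list at ingestion time. The scanner name, plugin IDs, and field names are invented for illustration; real connectors would call each vendor's API and handle paging and authentication.

```python
# Minimal sketch of an orchestration-style connector layer (hypothetical scanner APIs).
from dataclasses import dataclass
from typing import Iterable, List, Optional


@dataclass
class Finding:
    scanner: str            # which tool reported it
    asset_ip: str           # where it was seen
    plugin_id: str          # scanner-native identifier
    cve: Optional[str]      # CVE, if the scanner assigned one
    title: str


class Connector:
    """One connector per data silo; each knows how to pull and normalize."""

    name = "base"

    def pull(self) -> Iterable[dict]:
        raise NotImplementedError

    def normalize(self, raw: dict) -> Finding:
        raise NotImplementedError


class ExampleScannerConnector(Connector):
    name = "example-scanner"   # placeholder, not a real vendor integration

    def pull(self) -> Iterable[dict]:
        # Stand-in for a call to the scanner's API at scan-ingestion time.
        return [{"ip": "10.0.0.5", "plugin": "19506", "cve": None,
                 "name": "Scanner information disclosure"}]

    def normalize(self, raw: dict) -> Finding:
        return Finding(self.name, raw["ip"], raw["plugin"], raw["cve"], raw["name"])


def ingest(connectors: List[Connector]) -> List[Finding]:
    """Pull every silo into one normalized list at scan ingestion."""
    findings: List[Finding] = []
    for connector in connectors:
        findings.extend(connector.normalize(r) for r in connector.pull())
    return findings


if __name__ == "__main__":
    print(ingest([ExampleScannerConnector()]))
```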

Once system interconnections are made and data ingestion happens, data grooming is the next step. Large volumes of data must be compressed as much as possible to reduce subsequent effort. Correlation is the first step to ensuring that the workload is minimized, as it can result in effective data compression. Correlation of vulnerability data is often more difficult than just aligning CVEs, because many findings do not have assigned CVEs. A detailed map of vulnerabilities across the implemented scanners is needed in order to correlate effectively.
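A minimal sketch of that correlation step is shown below, assuming a hand-maintained equivalence map for findings that lack CVEs. The scanner names, plugin IDs, and internal vulnerability keys are illustrative.

```python
# Sketch: correlate findings across scanners by CVE where available, otherwise
# by a maintained cross-scanner equivalence map (IDs are illustrative).
from collections import defaultdict

# Hypothetical map of scanner-native IDs to an internal vulnerability key.
CROSS_SCANNER_MAP = {
    ("scanner_a", "PLUGIN-1234"): "VULN-OPENSSH-WEAK-MAC",
    ("scanner_b", "QID-42017"):   "VULN-OPENSSH-WEAK-MAC",
}


def correlation_key(finding: dict) -> str:
    """Prefer CVE; fall back to the cross-scanner map; else the scanner-native key."""
    if finding.get("cve"):
        return finding["cve"]
    mapped = CROSS_SCANNER_MAP.get((finding["scanner"], finding["plugin_id"]))
    return mapped or f'{finding["scanner"]}:{finding["plugin_id"]}'


def compress(findings: list) -> dict:
    """Group duplicate reports of the same vulnerability on the same asset."""
    grouped = defaultdict(list)
    for f in findings:
        grouped[(correlation_key(f), f["asset_ip"])].append(f)
    return grouped


if __name__ == "__main__":
    sample = [
        {"scanner": "scanner_a", "plugin_id": "PLUGIN-1234", "cve": None, "asset_ip": "10.0.0.5"},
        {"scanner": "scanner_b", "plugin_id": "QID-42017", "cve": None, "asset_ip": "10.0.0.5"},
    ]
    print({key: len(reports) for key, reports in compress(sample).items()})
```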

Asset correlation is the next step in compression. Asset uniqueness comes into play in getting accurate counts of true vulnerabilities. CMDB information is frequently inaccurate, though it may be referenced. While scanners may try to enhance asset uniqueness within their own scan set, when many different scanners are running, the asset overlap between scanners also requires correlation.

For the purposes of vulnerability management, assets boil down to identifying the specific location of the vulnerability. Asset correlation and uniqueness require specific rulesets and some knowledge of the dynamic network environment to realize good compression. Even with well-defined rules, methods for tuning, enriching asset information, and handling edge cases significantly improve the value of any vulnerability data. Rulesets and zoning can significantly help in ensuring accurate asset information, with easy manual input levers. Outliers can be addressed when encountered if standard actions for merging and separating assets are available as individual and bulk routines. Some examples of standard rulesets and zone configurations will be given.
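The sketch below shows one shape such rulesets and zones might take; the zone names, match fields, and merge behavior are assumptions, not a prescribed configuration. In a DHCP-heavy zone, IP is unreliable, so the rules prefer MAC or hostname; in a static server zone, IP is an acceptable tie-breaker.

```python
# Sketch: rule-based asset merging with per-zone match criteria (illustrative).
from typing import List, Optional

ZONE_RULES = {
    "corp-dhcp":  ["mac", "hostname"],   # IPs churn; match on stable identifiers
    "datacenter": ["hostname", "ip"],    # static addressing; IP is acceptable
}


def same_asset(a: dict, b: dict) -> bool:
    rules = ZONE_RULES.get(a.get("zone"), ["hostname", "ip"])
    return any(a.get(field) and a.get(field) == b.get(field) for field in rules)


def merge_assets(assets: List[dict]) -> List[dict]:
    """Collapse duplicate asset records so vulnerability counts are not inflated."""
    unique: List[dict] = []
    for asset in assets:
        match: Optional[dict] = next((u for u in unique if same_asset(asset, u)), None)
        if match:
            match.update({k: v for k, v in asset.items() if v})  # enrich with newest values
        else:
            unique.append(dict(asset))
    return unique


if __name__ == "__main__":
    print(merge_assets([
        {"zone": "corp-dhcp", "mac": "aa:bb:cc:dd:ee:ff", "ip": "10.1.1.20", "hostname": None},
        {"zone": "corp-dhcp", "mac": "aa:bb:cc:dd:ee:ff", "ip": "10.1.1.87", "hostname": "wks-042"},
    ]))
```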

Verification of vulnerability data is the next step in data compression. Many scanners produce large volumes of false positive results. Scanners prefer to err on the side of verbosity rather than miss a true vulnerability. Verification helps to ensure risk calculations are more accurate by assessing the confidence of each vulnerability data point.

Verification only works if you have accurate asset information, some automated means for gathering verification information, and a standardized format to feed that verification information into the vulnerability data. Many verification processes are performed via scripts, which are difficult to track and standardize. Some tools and frameworks can assist in addressing this problem and a few examples will be presented.
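One possible shape for that standardized format is sketched below. The field names are assumptions rather than a published schema; the point is that every verification script emits the same record so results can be fed back into the vulnerability data automatically.

```python
# Sketch: a standardized verification record emitted by any verification script
# (field names are assumptions, not a published schema).
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json


@dataclass
class VerificationResult:
    vuln_key: str          # correlation key from the compressed data set
    asset_id: str          # unique asset the check ran against
    verified: bool         # did the condition actually reproduce?
    confidence: float      # 0.0 - 1.0, used to weight the risk calculation
    method: str            # e.g. "banner-check", "authenticated-config-read"
    checked_at: str


def record(vuln_key: str, asset_id: str, verified: bool,
           confidence: float, method: str) -> str:
    """Emit one JSON line; any verification script can print this to stdout."""
    result = VerificationResult(
        vuln_key, asset_id, verified, confidence, method,
        datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(result))


if __name__ == "__main__":
    print(record("CVE-2017-0144", "asset-00042", True, 0.9, "banner-check"))
```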

Challenge 2

The NIST Vulnerability Management model outlines “Risk responses are identified and prioritized.” Large data volumes create large workloads. Aligning the combined data with an effective risk calculation to achieve a policy-based outcome is a challenge because data formats vary, the data requires a significant number of conditional cases to handle unique circumstances, and the data volume is large.

Solution 2

The accuracy of a risk calculation is, by definition, only as good as its input data, i.e., the quality of the compression. The compressed data from Solution 1 feeds the risk calculation so that the most accurate picture is available.

Risk ratings must be tuned to the unique environmental circumstances, such as the network topology, so they cannot be entirely automated without human oversight. The large volumes of vulnerability data require channels to group risks by type with mechanisms for tuning based on environmental situations individually and in bulk.
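A toy risk calculation along these lines is sketched below; the severity weights and zone multipliers are invented for illustration. It shows how verification confidence and human-tuned environmental weighting can both feed the score.

```python
# Sketch: risk score weighting base severity by verification confidence and
# environment-specific zone tuning (all weights are illustrative).
BASE_SEVERITY = {"critical": 10.0, "high": 7.5, "medium": 5.0, "low": 2.0}

# Human-tuned multipliers per network zone; reviewed by people, not fully automated.
ZONE_WEIGHT = {"internet-facing": 1.5, "datacenter": 1.0, "isolated-lab": 0.4}


def risk_score(finding: dict) -> float:
    base = BASE_SEVERITY.get(finding["severity"], 0.0)
    confidence = finding.get("confidence", 0.5)     # from the verification step
    zone = ZONE_WEIGHT.get(finding.get("zone"), 1.0)
    return round(base * confidence * zone, 2)


if __name__ == "__main__":
    print(risk_score({"severity": "high", "confidence": 0.9, "zone": "internet-facing"}))
```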

Risk can in turn be used for policy enforcement. Efficient policy adherence requires good situational awareness, that is, understanding the affected systems, the risks to those systems, and the potential impact. There are many conditions involved in ensuring policy is met, which require a framework for evaluation and pre-existing definitions of impact. These conditions must be documented and also considered during the automated assignment of prioritization, with the capability to handle exceptions.

To achieve policy enforcement, human resources must be alerted and aware of deadlines, both internal to the vulnerability management team and external to the resources responsible for remediating the vulnerabilities. The cross-functional nature of remediation again requires orchestration principles of system interconnection. Escalation paths and procedures should complement the standard paths for handling out-of-policy cases, with comprehensive reporting. All of this must be contained within a workflow solution to eliminate the human enforcement overhead and reduce human error.
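A minimal sketch of deadline tracking and escalation follows, assuming illustrative SLA windows and a notify() stub standing in for a real alerting or ticket-update integration.

```python
# Sketch: deadline and escalation checks driven by a simple remediation policy
# (SLA windows and the notify() stub are assumptions for illustration).
from datetime import date, timedelta

SLA_DAYS = {"critical": 15, "high": 30, "medium": 90, "low": 180}


def due_date(found_on: date, severity: str) -> date:
    return found_on + timedelta(days=SLA_DAYS.get(severity, 180))


def notify(target: str, message: str) -> None:
    print(f"[{target}] {message}")       # stand-in for email/chat/ticket update


def enforce(finding: dict, today: date) -> None:
    deadline = due_date(finding["found_on"], finding["severity"])
    if today > deadline:
        # Out of policy: escalate beyond the assigned remediator.
        notify("escalation-manager", f'{finding["key"]} is past its {deadline} SLA')
    elif (deadline - today).days <= 7:
        notify(finding["owner"], f'{finding["key"]} is due by {deadline}')


if __name__ == "__main__":
    enforce({"key": "CVE-2018-1111 on asset-17", "severity": "critical",
             "owner": "unix-team", "found_on": date(2018, 4, 1)}, date(2018, 5, 8))
```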

Challenge 3

The NIST model boils remediation down to mitigating newly identified vulnerabilities or documenting them as accepted risks. Remediation requires resource allocation at the human level and operationalization of those human resources. Due to the data volume, there are often more vulnerabilities than there are resources to address, fix, or even document them as accepted risks. Allocating resources to the remediation effort and tracking the effort involved can range from complex to nearly impossible. Remediated vulnerabilities may never have been marked as such, creating another challenge of data bloat.

Solution 3

To expedite resource allocation, one must consider the full operational picture. Resources must be grouped into pools to ease task assignment. Queueing algorithms can be leveraged to automate assignment, including round robin, top-to-bottom, and first available. The process by which tasks and information are made available to resources must follow documented procedures.
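The sketch below shows two of these strategies over hypothetical resource pools; a production workflow tool would layer persistence, skill matching, and availability data on top. The "first available" variant is approximated here as whichever pool member currently has the fewest open tasks.

```python
# Sketch of two assignment strategies over illustrative resource pools.
from itertools import cycle
from typing import Dict, List

POOLS: Dict[str, List[str]] = {
    "linux": ["alice", "dave"],
    "windows": ["bob", "carol"],
}


def assign_round_robin(tasks: List[dict], pool: str) -> List[tuple]:
    """Rotate through the pool so work is spread evenly."""
    members = cycle(POOLS[pool])
    return [(task["key"], next(members)) for task in tasks]


def assign_first_available(tasks: List[dict], pool: str,
                           open_counts: Dict[str, int]) -> List[tuple]:
    """Give each task to the pool member with the fewest open tasks."""
    assignments = []
    for task in tasks:
        member = min(POOLS[pool], key=lambda m: open_counts.get(m, 0))
        open_counts[member] = open_counts.get(member, 0) + 1
        assignments.append((task["key"], member))
    return assignments


if __name__ == "__main__":
    tasks = [{"key": f"VULN-{i}"} for i in range(4)]
    print(assign_round_robin(tasks, "linux"))
    print(assign_first_available(tasks, "windows", {"bob": 3, "carol": 0}))
```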

Remediation resources often spend significant time understanding the requirements for implementing the remediation of a given vulnerability. This detracts from how efficiently any one resource can realize remediation.

A vulnerability data rubric can eliminate the majority of the time spent preparing data for consumption by remediation resources and also help expedite the work of a remediation resource. Vulnerability enrichment data are often not standardized across scanners. Sometimes vulnerability information requires organization-specific details related to business impact or remediation policy.

A vulnerability rubric for the organization that spans all vulnerabilities is required to standardize the vulnerability description, impact, remediation instructions, and verification instructions. A good vulnerability rubric can greatly improve the outcome at later stages in the vulnerability lifecycle. It can be generated as a one-time effort from a starting library, often beginning with the NVD and then applying an organization-specific overlay. The rubric must be maintained and updated over time to ensure new vulnerabilities are included and data accuracy is maintained, but this is a small effort compared to the time otherwise spent prepping remediation instructions, or the time remediators spend trying to gather this information themselves.
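A sketch of what a single rubric entry might contain is shown below, combining NVD-sourced content with an organization-specific overlay. The field names and overlay keys are assumptions, not a standard format.

```python
# Sketch: a rubric entry overlaying organization-specific guidance on NVD data
# (field names and the overlay keys are assumptions).
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class RubricEntry:
    vuln_key: str
    description: str           # standardized, plain-language summary
    impact: str                # business impact in the organization's terms
    remediation: str           # concrete fix instructions for remediators
    verification: str          # how to prove the fix worked
    org_overlay: Dict[str, str] = field(default_factory=dict)


RUBRIC = {
    "CVE-2017-0144": RubricEntry(
        vuln_key="CVE-2017-0144",
        description="SMBv1 remote code execution (EternalBlue).",
        impact="Full host compromise; wormable across flat networks.",
        remediation="Apply MS17-010 and disable SMBv1 where possible.",
        verification="Re-scan port 445 or confirm patch level via configuration management.",
        org_overlay={"change_window": "standard", "owner_team": "windows"},
    ),
}


if __name__ == "__main__":
    print(RUBRIC["CVE-2017-0144"].remediation)
```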

Remediation is more efficient when human resources do not have to switch between systems. External systems where remediation resources actually work are the best targets for distributing remediation tasks. For instance, developers work in Scrum tools and would find it much easier to consume vulnerability information in their typical go-to system rather than in yet another tool. Orchestration principles apply in this case: tracking must stay central while remediation information is distributed across other systems. This calls for a bidirectional integration between vulnerability management and ticketing.
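The sketch below shows the two legs of such an integration: the remediation task is pushed into the remediator's own tool, and status flows back into the central record. The TicketSystem class is a stand-in for a Jira/ServiceNow-style connector, not a real API.

```python
# Sketch: bidirectional sync between central tracking and an external ticketing
# system (TicketSystem is a hypothetical stand-in, not a real connector).
from typing import Dict


class TicketSystem:
    def __init__(self) -> None:
        self._tickets: Dict[str, dict] = {}

    def create(self, summary: str, instructions: str) -> str:
        ticket_id = f"TCKT-{len(self._tickets) + 1}"
        self._tickets[ticket_id] = {"summary": summary,
                                    "instructions": instructions,
                                    "status": "open"}
        return ticket_id

    def status(self, ticket_id: str) -> str:
        return self._tickets[ticket_id]["status"]


def distribute(finding: dict, tickets: TicketSystem, central: Dict[str, dict]) -> str:
    """Outbound leg: the remediation task lands in the remediator's own tool."""
    ticket_id = tickets.create(finding["title"], finding["remediation"])
    central[finding["key"]] = {"ticket": ticket_id, "status": "open"}
    return ticket_id


def sync_back(key: str, tickets: TicketSystem, central: Dict[str, dict]) -> None:
    """Inbound leg: ticket status flows back into the central record."""
    central[key]["status"] = tickets.status(central[key]["ticket"])


if __name__ == "__main__":
    central: Dict[str, dict] = {}
    system = TicketSystem()
    distribute({"key": "CVE-2017-0144:asset-00042",
                "title": "SMBv1 RCE on asset-00042",
                "remediation": "Apply MS17-010; disable SMBv1."}, system, central)
    sync_back("CVE-2017-0144:asset-00042", system, central)
    print(central)
```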

Additional logic should be included for remediation marking to ensure accuracy. Automation can help with reminders to mark data accurately, but cannot cover every case. Vulnerabilities within a data set that cease to appear in scans may potentially be closed, even if they were never marked as such by the human resource or simply went away during a standard update. Business logic that considers the data and marks the remediation based on acceptable criteria assists in truing up the accuracy of the remediation work pool. As with any automation, levers for human management are required for oversight. A ruleset and a scope of accurate asset data are needed to allow for automatic marking of data. An example of this is sketched below.
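One simple form of that business logic is sketched below: a finding is auto-closed only after it has been absent from a configurable number of consecutive scans. The threshold and field names are assumptions, and the threshold itself is the human-adjustable lever.

```python
# Sketch: auto-close findings absent from a configurable number of consecutive
# scans (threshold and field names are assumptions).
from typing import List, Set

AUTO_CLOSE_AFTER_MISSES = 2   # human-adjustable lever, not a fixed rule


def auto_close(open_findings: List[dict], latest_scan_keys: Set[tuple]) -> List[dict]:
    """Increment a miss counter for absent findings; close those past the threshold."""
    closed = []
    for finding in open_findings:
        if (finding["vuln_key"], finding["asset_id"]) in latest_scan_keys:
            finding["misses"] = 0
        else:
            finding["misses"] = finding.get("misses", 0) + 1
            if finding["misses"] >= AUTO_CLOSE_AFTER_MISSES:
                finding["status"] = "remediated (auto)"
                closed.append(finding)
    return closed


if __name__ == "__main__":
    findings = [{"vuln_key": "CVE-2018-1111", "asset_id": "a1", "misses": 1, "status": "open"}]
    print(auto_close(findings, latest_scan_keys=set()))
```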

Challenge 4

Vulnerability data has implications that may be important to incident responders or other parties within the organization. Vulnerability data should not exist in silos. Documenting the unremediated risks, which are often changing in real time, and making that information available for consumption within the organization is not possible without automation.

Solution 4

All vulnerability management systems should have pre-existing hooks that allow the internal data to be leveraged by external systems. Providing a standardized API as well as integration connectors for common external systems can go a long way toward providing data visibility across internal organizations. The data must be real time, as it changes so often. Incident responders in particular should be made aware of concise, accurate vulnerability information to consider during the course of a response. This kind of information can be made available directly, assuming the vulnerability data has gone through compression and been enriched with other internal information, such as asset data.
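A minimal sketch of such a read-only hook, using only the Python standard library, is shown below. The URL path shape and the contents of STORE are assumptions; a production system would add authentication and serve the live, compressed data set.

```python
# Sketch: a minimal read-only endpoint exposing compressed, enriched vulnerability
# data to other teams (paths and sample data are assumptions).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for the central, compressed vulnerability store.
STORE = {
    "asset-00042": [{"vuln_key": "CVE-2017-0144", "risk": 13.5, "status": "open"}],
}


class VulnAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /assets/asset-00042/vulnerabilities
        parts = self.path.strip("/").split("/")
        if len(parts) == 3 and parts[0] == "assets" and parts[2] == "vulnerabilities":
            body = json.dumps(STORE.get(parts[1], [])).encode()
            self.send_response(200)
        else:
            body = b'{"error": "not found"}'
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), VulnAPI).serve_forever()
```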

Summary

Orchestration methodology is a very useful concept to apply to many of the challenges in implementing the NIST Vulnerability Management model, much as it is in implementing the NIST Incident Response model. The concept alone does not solve the problem: the framework must be created and backed with automation because of the large data volumes.

Asset and vulnerability uniqueness will be a major part of making all tenets of the NIST model achievable. Data compression and standardized approaches for the data flow are a must.

Comprehensive vulnerability management plans should consider how orchestration should work within the individual organization's environment across all vulnerability types. The tools implemented should support open, agnostic integrations as well as open data visibility to other systems in order to realize true value and effectively support the NIST model.

Vice President of Product Management, NetSPI