5 Practical Ingredients to Perform Critical Vulnerability Management Testing

By Tim Callahan, SVP, Global Chief Security Officer, Aflac


We should all be careful not to overuse the public lessons learned from the Equifax event. There are probably facts that an outsider simply doesn't know, and there likely are deeper root causes behind the breach. But we should not discount what we have learned, and we can judge the effectiveness of our own programs in light of these lessons.

The most obvious lesson is that an attacker exploited a known vulnerability to compromise the network and to steal data. If nothing else, the breach provided us the opportunity to examine and explain our program to the most senior leadership in our company. The process was explained to our oversight committees and gave us the opportunity to talk about vulnerability management with each of our businesses. In the end, the process went through scrutiny it had not previously received.

"Vulnerability Management is a broad program to identify all vulnerabilities in the technology environment, and it’s a distinct security function to manage"

As this was being discussed, it became obvious that there is some misunderstanding about the relationship between, and the differences between, Vulnerability Management and Patch Management. Many people use the terms interchangeably or synonymously. I believe they are distinct programs with a common goal: securing the technology environment. The distinction is important in order to define responsibility and accountability and to ensure appropriate segregation of duties.

Vulnerability Management is a broad program to identify all vulnerabilities in the technology environment, and it’s a distinct security function to manage. Patch Management is the remediation of vulnerabilities for which there is a patch, and it is clearly a technology function. There are vulnerabilities that a comprehensive Vulnerability Management program will discover for which the fix is not a patch but may require coding efforts, reimaging, rebuilding, or other means. There are also vulnerabilities discovered by outside sources and security researchers that must be incorporated into the Vulnerability Management program and may require efforts beyond the capability of company resources to remediate.

I have spoken to technology and security people who claim they have a Vulnerability Management program, but they were only patching vulnerabilities based on the manufacturer’s patch release cycle. In this scenario, they would patch based on the criticality classification (Critical, High, Medium, and Low) that the manufacturer released, rather than risk-assessing and classifying based on the risk tolerance defined by their company’s risk management process. This may describe a mature Patch Management process, but it would not be considered Vulnerability Management.

It is important to make a company risk decision about what vulnerabilities will be remediated and the acceptable timeline. My experience has been that “Critical” vulnerabilities are remediated within 72 hours, “Highs” within 30 days, “Medium” within 90 days, and “Low” vulnerabilities are monitored but remediated within normal release cycles.
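The remediation timelines above can be expressed as simple service-level windows. Here is a minimal sketch in Python; the `SLA_WINDOWS` mapping encodes the 72-hour/30-day/90-day windows described above, and the function name and structure are illustrative assumptions, not any particular tool's API.

```python
from datetime import datetime, timedelta

# SLA windows from the timelines described in the text (assumptions, not a
# standard): Critical within 72 hours, High within 30 days, Medium within
# 90 days. "Low" items follow normal release cycles, so no deadline is set.
SLA_WINDOWS = {
    "Critical": timedelta(hours=72),
    "High": timedelta(days=30),
    "Medium": timedelta(days=90),
}

def remediation_deadline(severity: str, detected_at: datetime):
    """Return the remediation due date for a finding, or None for Low items."""
    window = SLA_WINDOWS.get(severity)
    return detected_at + window if window is not None else None

detected = datetime(2018, 3, 1, 9, 0)
print(remediation_deadline("Critical", detected))  # 2018-03-04 09:00:00
```

A real program would layer business-day logic and exception handling for accepted risks on top of this, per the company's risk management process.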

A mature Vulnerability Management program at a high level involves: Identification, Assessment, Communication, Mitigation, and Tracking and Reporting.

1. Identification: This step is where vulnerabilities are found, through several methods. The major ones are:

Vulnerability Scans: This process requires a vulnerability scanning tool or service. A scan should include all devices within the internal and external (outward-facing) network. Effective scanning should ensure that subnetworks, VLANs, etc. are covered. Scanning should compare devices against the last scan and the asset inventory system to determine whether any new devices have shown up on the network; these should be investigated to ensure they are authorized. These scans should be conducted weekly. If there is concern that scanning may affect network performance, the network can be divided into segments to scan. Some companies will scan in off-hours only. This may be acceptable, but there has to be a process to ensure that laptops and desktops that may not be connected during off-hours are still covered.
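The comparison of a scan against the previous scan and the asset inventory boils down to set arithmetic. A minimal sketch, assuming each scan yields a set of device identifiers (hostnames, MAC addresses, or similar); the variable names are illustrative, not from any specific scanning product.

```python
# Flag devices that are new since the last scan AND absent from the asset
# inventory -- candidates for investigation as unauthorized devices.
def unauthorized_devices(current_scan: set, previous_scan: set,
                         asset_inventory: set) -> set:
    """Return devices seen now, not seen last scan, and not in inventory."""
    new_since_last = current_scan - previous_scan
    return new_since_last - asset_inventory

previous_scan = {"srv-01", "srv-02", "lap-117"}
current_scan = {"srv-01", "srv-02", "lap-117", "unk-9f3a"}
asset_inventory = {"srv-01", "srv-02", "lap-117"}
print(unauthorized_devices(current_scan, previous_scan, asset_inventory))  # {'unk-9f3a'}
```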

Third-Party Sources: This would involve the manufacturers’ list as well as other free and paid services that report on vulnerabilities. Monitoring for relevant vulnerability information from appropriate vendors, feeds, third-party research, and public domain resources should occur on a daily basis.

Third-Party Vulnerability Assessments: Companies should ensure there is an independent third-party assessment at least annually. This is usually conducted as part of an annual Penetration Test.

2. Assessment: This step is where identified vulnerabilities are assessed against the company’s risk appetite and for applicability. Most scanning tools and services take feeds from multiple sources and use the Common Vulnerabilities and Exposures (CVE) list. This, along with the product’s own assessment, will provide a criticality factor. Generally, these ratings show the inherent classification rather than the residual one (the rating after protections are taken into account).

The assessment process looks especially at those vulnerabilities that are rated “Critical” to determine if there are mitigating factors that would lower the rating. Some organizations also use the process for those vulnerabilities that are rated “High.” This step is designed to take the inherent criticality rating and assess it against the layers of protection the company has that would reduce the risk. This is important because there are always more vulnerabilities than IT can reasonably remediate. Security should prioritize those that pose the most risk.
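The inherent-to-residual adjustment described above can be sketched as a simple downgrade against the severity scale. This is a hedged illustration only: the one-level-per-control rule is invented for the example, and a real program would derive residual ratings from its own risk framework.

```python
# Severity scale from lowest to highest, matching the article's classifications.
ORDER = ["Low", "Medium", "High", "Critical"]

def residual_rating(inherent: str, mitigating_controls: int) -> str:
    """Lower the inherent rating one level per mitigating control (assumed
    rule for illustration), never dropping below Low."""
    idx = max(ORDER.index(inherent) - mitigating_controls, 0)
    return ORDER[idx]

print(residual_rating("Critical", 1))  # High
print(residual_rating("High", 3))     # Low
```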

3. Communication: There must be a defined process to communicate vulnerabilities detected along with criticality factor to appropriate technology personnel to begin the remediation/mitigation process. Often companies will have a distribution list based on technology ownership to communicate on a regular basis.

4. Mitigation: Once communicated to the appropriate technology partner, the remediation period begins. The technology team must determine how to remediate the vulnerability. If there is a patch, they would apply the patch. If other efforts such as coding are needed, that process would kick off. Technology partners may be able to take mitigating steps to lower the criticality factor, and they should work with security to ensure agreement on those steps. For instance, a web vulnerability may be mitigated by a web application firewall. While this would not be a remediation, it may provide enough protection to lower the rating and permit the fix to proceed on a less urgent timeline.

5. Tracking and Reporting: A mature program will involve a methodology to track vulnerabilities, maintain a historical record, chart trends, measure against defined criteria, and track remediation/mitigation efforts. It is not enough to just report vulnerabilities to IT and forget about them. A tracking system should be part of the program. Some companies will incorporate this in an IT Governance, Risk and Compliance (GRC) tool, and some automatically ingest vulnerabilities directly from the scanning tool. Part of this process is to ensure the remediation has taken place. For instance, if the same vulnerability is reported on the same device, that would be an indicator that the remediation effort was not effective. This would lead to further action to determine why the fix did not solve the problem. Reporting is designed to ensure senior management is aware of the efficacy of the Vulnerability Management program.
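The recurrence check described above, where the same vulnerability reappearing on the same device signals an ineffective fix, can be sketched as an intersection of consecutive scan results. The (device, CVE) pair representation and the sample CVE identifiers are illustrative assumptions.

```python
# Each scan is modeled as a set of (device, cve_id) findings.
def recurring_findings(scan_history: list) -> set:
    """Return findings present in both of the two most recent scans --
    an indicator that remediation was not effective."""
    if len(scan_history) < 2:
        return set()
    return scan_history[-1] & scan_history[-2]

march = {("srv-01", "CVE-2017-5638"), ("lap-117", "CVE-2017-0144")}
april = {("srv-01", "CVE-2017-5638")}
print(recurring_findings([march, april]))  # {('srv-01', 'CVE-2017-5638')}
```

In practice this logic would live in the GRC tool or tracking system, feeding the trend and efficacy reporting the program provides to senior management.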

By effectively addressing all of these components— Identification, Assessment, Communication, Mitigation, and Tracking and Reporting—companies can implement and experience a mature Vulnerability Management program that achieves desired business objectives.