More than 20 years ago, David Mann and Steven Christey proposed the idea of a Common Vulnerability Enumeration (CVE) in their seminal work, Towards a Common Enumeration of Vulnerabilities (1998). When CVE was adopted by MITRE shortly thereafter, the meaning changed slightly to “Common Vulnerabilities and Exposures.” This shift in nomenclature has a powerful lesson to teach us about vulnerability management decades later. Namely, not all bugs are created equal, nor does every bug render a system the same level of “vulnerable.” In fact, flaws without a corresponding threat are not vulnerabilities at all; they are technically just exposures (as was passionately articulated to the MITRE committee by self-described pedant Gene Spafford years ago).
In 1998, there were a few hundred vulnerabilities assigned a CVE. By 2021, the volume of disclosed vulnerabilities had surged by comparison: nearly 20,000 unique bugs were assigned CVEs in a single year. The modern enterprise has other vulnerability data to contend with as well, such as findings from penetration tests, application security tools, open-source dependency graphs, and more. This data doesn’t even come with a CVE to help track it, but it competes for your remediation team’s finite attention nonetheless.
Patching every bug is logistically impossible for most organizations. And even where it is operationally feasible, it’s still likely inappropriate and cost-prohibitive. It may sound noble to try to remediate each deficiency, but the reality is that only a very small percentage of all vulnerabilities (approximately 2 percent) will ever be adopted by threat actors and used in a real-world cyber-attack. Remediating everything presents an opportunity cost to the business, and the patching churn may itself introduce the risk of system instability.
“It may sound noble to try to remediate each and every deficiency, but the reality is only a very small percentage of all vulnerabilities will ever be adopted by threat actors and used in a real-world cyber-attack.”
Pinpointing which vulnerabilities carry substantial risk and thus deserve prompt remediation may seem daunting at first, but it’s a relatively straightforward data science problem. Let’s briefly discuss what kinds of factors best predict when a vulnerability will be weaponized.
First, it’s best to start with a strong foundation. Consider the intrinsic attributes of the vulnerability itself: Is it remotely exploitable? Does the exploit require user interaction? Does the impact extend beyond the scope of the vulnerable component? Next, evaluate the maturity of proof-of-concept exploits for the vulnerability: Are the technical details of the bug known? Is there a working script to detonate the vulnerability on common repositories like GitHub or Exploit-DB? How much effort would it take to adapt those publicly available resources into a bona fide exploit suitable for use in a real-world incursion? And what mitigating controls are in place (e.g., EDR, SIEM, NGAV)?
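The intrinsic and exploit-maturity attributes above lend themselves to a simple weighted score. The sketch below is illustrative only: the field names, weights, and the placeholder CVE identifier are assumptions for this example, not a published standard, and a real program would calibrate the weights against historical exploitation data.

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    """Hypothetical record: intrinsic attributes plus exploit-maturity signals."""
    cve_id: str
    remotely_exploitable: bool  # network attack vector, no local access needed
    user_interaction: bool      # exploit requires a victim action (e.g., opening a file)
    scope_changed: bool         # impact extends beyond the vulnerable component
    details_public: bool        # a technical write-up of the bug exists
    poc_public: bool            # working script on GitHub, Exploit-DB, etc.

def exploitability_points(v: Vuln) -> int:
    """Toy additive score out of 100; higher means easier to weaponize.
    The weights are illustrative, not calibrated."""
    points = 0
    points += 30 if v.remotely_exploitable else 0
    points += 20 if not v.user_interaction else 0  # no user action required
    points += 10 if v.scope_changed else 0
    points += 15 if v.details_public else 0
    points += 25 if v.poc_public else 0
    return points

# A bug with every risk factor present maxes out the scale.
worst_case = Vuln("CVE-XXXX-YYYY", True, False, True, True, True)
print(exploitability_points(worst_case))  # 100
```

Even this crude form captures the key intuition: a remotely exploitable bug with a public proof of concept deserves far more attention than a local bug requiring user interaction with no published details.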
Evaluate whether the bug is known to be used by threat actors and/or malware. Open-source intelligence can be invaluable in identifying vulnerabilities relevant to your threat model. For example, if you're worried about ransomware (and who isn’t?), priority should be given to bugs leveraged by ransomware actors in real-world intrusions. If you suspect you are in the crosshairs of Fancy Bear, elevate remediation priority for the vulnerabilities targeted by this Russian adversary with a proclivity for espionage. Conversely, some vulnerabilities may be under active exploitation but remain out of scope for your organization. Pay close attention if reporting suggests that an exploit is “highly targeted.” It is not uncommon for vulnerabilities to be used in hyper-specific regional contexts, particularly by nation-state actors conducting surveillance. If a threat is not germane to your industry or geography, it should be deprioritized accordingly.
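In code, this threat-model filter can be as simple as a set intersection. The feed below is an illustrative stand-in for real cyber threat intelligence: the actor labels, the placeholder CVE, and the mappings themselves are assumptions made for the sketch.

```python
# Hypothetical CTI feed mapping CVEs to the threat categories observed using them.
observed_use = {
    "CVE-2017-0144": {"ransomware"},               # e.g., exploited by WannaCry
    "CVE-2021-26855": {"espionage", "ransomware"},
    "CVE-2014-XXXX": {"regional-surveillance"},    # placeholder: hyper-targeted use only
}

# The threat categories this organization actually worries about.
threat_model = {"ransomware"}

def escalate(cve: str) -> bool:
    """Escalate only when a bug's known users intersect our threat model."""
    return bool(observed_use.get(cve, set()) & threat_model)

print(escalate("CVE-2017-0144"))  # True  — ransomware-relevant, escalate
print(escalate("CVE-2014-XXXX"))  # False — targeted elsewhere, deprioritize
```

The same pattern extends naturally to industry and geography tags: a bug exploited only in hyper-targeted regional surveillance campaigns simply never intersects the threat model of most enterprises.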
Other signals that can help you predict exploitation include social media chatter and honeypots. If a bug is being discussed all over Twitter, it could be significant. Similarly, if honeypots are detecting scans or full-blown exploitation attempts for a particular vulnerability, this should raise the priority level for that bug.
The exact algorithm you use will depend on which data sources you have available. There are many paths to success and many opportunities for refinement here. As long as the model you choose is predictive of exploitation and logically defensible, you should be well on your way to a successful vulnerability management program.
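As one concrete (and deliberately simple) possibility, the signals discussed above can feed a linear scoring model used to rank the backlog. Every weight, field name, and CVE label here is an assumption for the sake of the sketch; a real program would fit the weights to historical exploitation outcomes.

```python
# Illustrative weights over normalized (0-1) signals; not calibrated values.
WEIGHTS = {
    "intrinsic_severity": 0.20,  # e.g., normalized CVSS base score
    "poc_public": 0.25,          # proof-of-concept exploit available
    "in_the_wild": 0.35,         # threat-intel reports of active exploitation
    "chatter": 0.10,             # social media discussion volume
    "honeypot_hits": 0.10,       # scans/exploit attempts seen in honeypots
}

def priority(signals: dict) -> float:
    """Linear combination of whichever signals are present for a bug."""
    return sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)

backlog = {
    "CVE-A": {"intrinsic_severity": 0.98, "poc_public": 1,
              "in_the_wild": 1, "honeypot_hits": 1},
    "CVE-B": {"intrinsic_severity": 0.75},  # severe on paper, no exploitation signals
    "CVE-C": {"intrinsic_severity": 0.60, "chatter": 1},
}

ranked = sorted(backlog, key=lambda c: priority(backlog[c]), reverse=True)
print(ranked)  # ['CVE-A', 'CVE-C', 'CVE-B'] — exploitation signals outrank raw severity
```

Note how the ranking departs from severity alone: the bug with active exploitation evidence tops the list even though another bug has a higher base score. The exact functional form matters less than the quality of the data feeding it.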
By homing in on the tiny subset of truly dangerous bugs, you can have the best of both worlds in vulnerability management. You get to be an excellent steward of vulnerability risk for the organization. At the same time, you lift a tremendous burden from the operational teams chartered with researching, downloading, regression testing, and applying the deluge of security mitigations.
The optimal remediation strategy in the modern era looks drastically different than it did in 1998. The goal is not to fix everything. Rather, the goal is to instrument all of your vulnerability data so that you can identify everything that you truly need to fix. If applied properly, data science and machine learning can help you safely eliminate more than 90% of your remediation workload while simultaneously reducing your organization’s exposure to vulnerability risk. We owe a great debt to Mann and Christey for the interoperability we now enjoy thanks to the CVE standard. Still, laser focus is necessary to successfully navigate the growing storm of CVEs and to discern between mission-critical vulnerabilities and mundane exposures.