Prioritization: The Key to Effective Vulnerability Management
By Genady Vishnevetsky, Chief Information Security Officer, Stewart
For decades we have been trying to solve the problem of managing vulnerabilities. Yet looking at security breaches over the last fifteen years, we are not making much progress. Why is it that we have not been able to solve this problem? First, because there is no prescription for it, and it is difficult. The software we use gets more complicated every year. Legacy operating systems and applications cripple us because businesses want to keep them around as long as they generate revenue. The use of open source code and common frameworks (e.g., Java, Adobe Flash, Macromedia, Adobe AIR) creates new dependencies and impediments to later patching. Physical walls are eroding, and we are fully immersed in a virtual, always-connected world. We have tiny computers (IoT devices) the size of a thumbnail, powered by batteries and attached to our networks. Most importantly, we are all human, so we make mistakes. We make errors when we write code and when we configure and manage our infrastructure.
So how can we learn from all these breaches and our own mistakes, and do better? By no stretch of the imagination have I found a silver bullet. Remember, one can only influence what one can control. It is time we stop boiling the ocean by trying to patch everything in thirty/sixty/ninety days. We should stop believing that software and applications will ever be error-free.
Here are a few ideas that may help you take your life back:
• Not everything is created equal - prioritize
• Reduce your scope - determine a risk value for every asset. When you are ready to tackle the problem, start with what poses the highest risk to your business
• The risk to the enterprise is a business decision, not yours. Get the business involved. What you perceive as higher risk may not resonate with your peers
• If you can’t solve the problem, develop compensating controls. We can’t eliminate every threat. The best risk management strategy is risk reduction
So, let’s take a closer look at some of these ideas.
Not all endpoints are created equal, so don’t boil the ocean. Not all assets have the same value, and thus the same risk, to the business. A web server sitting in a DMZ, servicing a core application that generates half of your company’s monthly revenue, poses a different risk than a web server buried deep inside your network that serves a management tool. You should treat them differently. It is imperative to assign a score to every asset based on its criticality to the business, its location (DMZ or segmented off), its purpose (database, web server, utility), and whether it runs a legacy operating system or application. Factor in segmentation or any existing compensating controls, then group your assets by priority. There is no magic formula, and the outcome will be different for every enterprise. This exercise will lead to organizing assets by what is essential to the business. Smaller subsets are more manageable to test and create less friction. Now you can prioritize.
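As a rough illustration of the scoring exercise above, here is a minimal sketch in Python. The factors and weights are hypothetical assumptions for demonstration, not a formula prescribed by any standard; every enterprise would tune its own.

```python
# Hypothetical asset risk-scoring sketch. Factor names and weights are
# illustrative assumptions only; tune them to your own environment.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    business_criticality: int   # 1 (low) to 5 (revenue-critical)
    internet_facing: bool       # e.g., sits in a DMZ
    legacy_os: bool             # runs a legacy OS or application
    compensating_controls: int  # count of controls (segmentation, HIDS, ...)

def risk_score(asset: Asset) -> int:
    score = asset.business_criticality * 10
    if asset.internet_facing:
        score += 20                          # exposed to the Internet
    if asset.legacy_os:
        score += 15                          # harder to patch, older architecture
    score -= asset.compensating_controls * 5 # controls reduce residual risk
    return max(score, 0)

assets = [
    Asset("dmz-web-01", 5, True, False, 1),   # revenue-generating DMZ server
    Asset("internal-mgmt", 2, False, True, 2) # internal management tool
]

# Tackle the highest-risk assets first.
for a in sorted(assets, key=risk_score, reverse=True):
    print(a.name, risk_score(a))
```

With these made-up weights, the revenue-critical DMZ server ranks well above the internal management server, which is exactly the prioritization the paragraph argues for.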
"One can only influence what one can control"
When we patch, we try to address everything. That is almost impossible. Tell me if this does not resemble your environment: Microsoft releases an update to the .NET Framework that modifies a library, and your legacy application no longer works. You file a policy exception, and off you go - that system is never patched again. Think of how many critical applications have dependencies on older versions of Java. So how do we solve this problem? A CVSS score by itself is not meaningful unless put in the context of your environment. According to Kenna Security, 77 percent of CVEs have no published or observed exploits, 21.2 percent have a publicly released exploit, and only 1.8 percent have been actively exploited. The overall efficiency of patching “everything” drops after 15 percent coverage. Start small - build your patch management program to address everything that has active exploits in the wild.
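The ranking logic this paragraph describes - exploit status first, raw CVSS score second - can be sketched in a few lines. The CVE records and status labels below are illustrative assumptions, not real vulnerability data.

```python
# Hypothetical sketch: build a patch queue ordered by exploit status,
# not by raw CVSS score alone. The CVE entries below are made up.
cves = [
    {"id": "CVE-A", "cvss": 9.8, "exploit": "active"},     # exploited in the wild
    {"id": "CVE-B", "cvss": 7.5, "exploit": "published"},  # public PoC released
    {"id": "CVE-C", "cvss": 9.1, "exploit": "none"},       # no known exploit
]

# Active exploits first, then published PoCs, then everything else;
# within each tier, break ties with the higher CVSS score.
priority = {"active": 0, "published": 1, "none": 2}
patch_queue = sorted(cves, key=lambda c: (priority[c["exploit"]], -c["cvss"]))

for cve in patch_queue:
    print(cve["id"], cve["exploit"], cve["cvss"])
```

Note that CVE-C carries a higher CVSS score than CVE-B but still lands last in the queue, because nothing is exploiting it yet - that is the point of context-driven prioritization.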
Anything you can’t patch - isolate. Technology has made enormous progress. Next-generation firewalls allow you to segregate traffic not only at the network layer but also at the application layer, and in some instances provide virtual patching. Host Intrusion Detection Systems and Application Control solutions can serve as viable compensating controls to preserve the integrity of the host. Don’t assume the risk of a legacy application is limited to data compromise; old architectures can enable lateral movement. Micro-segmentation and application virtualization can limit exposure from legacy operating systems.
All applications accessible from the Internet should be the highest priority, regardless of whether you develop or purchase them. As with anything else, good security hygiene helps. For in-house developed applications, add security checkpoints early in your Software Development Life Cycle. Start with training developers on secure coding; then layer in security tools to improve the odds of delivering error-free code to production. The first layer can live in the Integrated Development Environment (IDE): some vendors offer IDE plugins that allow developers to check for security flaws as they code. Then Static Application Security Testing (SAST) tools examine the entire code branch for known vulnerabilities. Lastly, Dynamic Application Security Testing (DAST) tools ensure that no configuration weaknesses are introduced when the system is installed in production. I realize this is easier said than done. We face shorter development sprints and more aggressive delivery schedules. If you can’t address every vulnerability on the spot, use other security instruments as temporary compensating controls. Web Application Firewalls (WAF) have been around for a while and are very mature; they can block certain attacks from executing against your web application. Their successor, Runtime Application Self-Protection (RASP), is gaining popularity, as it moves protection closer to the application itself at the host level.
I don’t think there is a right or wrong answer in this journey, but I firmly believe that “doing the same thing over and over again and expecting different results is a sign of insanity.” Let’s try a different approach to our vulnerability management programs and see if it makes a difference.