Effectively Reporting Vulnerability Management Performance

by Brian Petzold | Jul 27, 2018


The Board of a financial institution is responsible for ensuring that the vulnerability management program is effective, and it often delegates monitoring to senior management. IT departments diligently provide reports from their vulnerability management system to demonstrate the effectiveness of the program, but a common complaint we hear from management is that these reports do a poor job of showing progress. This week, we will look at some common ways of measuring vulnerability management performance and highlight some of the issues you may encounter with each.

  • Letter Grades: Some vulnerability management systems try to assign letter grades to assess performance. In school, a letter grade is easily calculated by dividing the number of correctly answered questions by the total number of questions, then assigning a letter based on how the average student performed. To understand why this does not translate well to vulnerability management, imagine a test where questions (or vulnerabilities) keep getting added at random times, with every student (or IT environment) receiving completely different questions (vulnerabilities). The grade would be driven primarily by the timing of the report and by luck, providing very little measurement of performance at all. If you are using letter grades to measure the performance of your vulnerability management program, be sure you understand how the grade is calculated. The first sketch after this list shows how disclosure timing alone can move a grade.

  • Risk Scores: Many vulnerability management systems have devised a method to calculate the average risk of vulnerabilities in your environment. In some cases, these scores are simply vulnerability severity scores averaged across your environment. This does not take the age of vulnerabilities into consideration, so if Microsoft announces critical vulnerabilities the day before you run the report, you will receive a poor risk score no matter how well your program is performing. Better risk score methods take vulnerability age, and sometimes the business risk of the asset itself, into consideration. Be sure to understand what goes into a risk score before using it to measure performance. The second sketch after this list contrasts a naive severity average with an age- and asset-weighted score.

  • Aging Reports: Aging reports attempt to measure the performance of the vulnerability management program by measuring how quickly vulnerabilities are remediated. They can be based either on the date a vulnerability was first discovered on a system or on the date the vulnerability was first made public (or "published"). We recommend using the published date (if your system supports it) because it highlights both new systems entering the environment with older vulnerabilities and older vulnerabilities that recur. The third sketch after this list shows a simple aging report keyed off the published date.
     
  • Independent Vulnerability Assessments: In addition to internal vulnerability assessments, institutions hire outside parties to perform independent scans using separate detection systems. The results of these assessments rarely match those of the internal vulnerability management system exactly, because the tools detect vulnerabilities in different ways and usually use different scoring systems. We recommend sitting down with the independent party, with both sets of reports in hand, and having them help explain the differences.
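
To see why timing dominates a letter grade, here is a minimal sketch. The grade cutoffs and the numbers are made up for illustration and are not any vendor's actual formula:

```python
# A minimal sketch, assuming a school-style curve; the cutoffs below are
# illustrative assumptions, not any vendor's actual formula.

def letter_grade(remediated: int, total: int) -> str:
    """Map the remediated percentage onto a letter grade."""
    pct = remediated / total * 100
    if pct >= 90:
        return "A"
    if pct >= 80:
        return "B"
    if pct >= 70:
        return "C"
    if pct >= 60:
        return "D"
    return "F"

# Monday: 95 of 100 known vulnerabilities are remediated.
print(letter_grade(95, 100))  # A (95%)

# Tuesday: a vendor discloses 30 new criticals overnight. The team's
# performance has not changed, but the grade drops anyway.
print(letter_grade(95, 130))  # C (~73%)
```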
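
Next, a rough sketch of the difference between a naive severity average and an age- and asset-weighted score. The record layout, the 30-day grace window, and the weighting formula are illustrative assumptions, not a specific product's scoring method:

```python
from datetime import date

# Hypothetical records: (severity, published_date, asset_weight). All
# values below are made up for illustration.
vulns = [
    (9.8, date(2018, 7, 26), 1.0),  # critical, published yesterday
    (9.8, date(2018, 7, 26), 1.0),  # critical, published yesterday
    (5.3, date(2018, 1, 10), 2.0),  # medium, six months old, key asset
]

today = date(2018, 7, 27)

# Naive score: a simple severity average. Yesterday's criticals dominate it.
naive = sum(sev for sev, _, _ in vulns) / len(vulns)

def weighted(sev, published, asset_weight, grace_days=30):
    """Score a vulnerability by how far past the patch window it is."""
    age = (today - published).days
    overdue = max(age - grace_days, 0) / grace_days  # 0 inside the window
    return sev * asset_weight * overdue

# Weighted score: driven by the old, overdue item on the important asset,
# not by brand-new vulnerabilities still inside the patch window.
weighted_avg = sum(weighted(*v) for v in vulns) / len(vulns)

print(f"naive average severity:  {naive:.1f}")
print(f"age/asset-weighted risk: {weighted_avg:.1f}")
```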
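
Finally, a minimal aging-report sketch keyed off the published date. The record layout, the placeholder vulnerability names, and the bucket boundaries are assumptions you would adjust to match whatever your scanner exports:

```python
from datetime import date

# Hypothetical findings; names and dates are placeholders for illustration.
findings = [
    {"host": "web01", "vuln": "VULN-001", "published": date(2017, 3, 14)},
    {"host": "app02", "vuln": "VULN-002", "published": date(2018, 5, 8)},
    {"host": "db01",  "vuln": "VULN-003", "published": date(2018, 7, 20)},
]

today = date(2018, 7, 27)
buckets = {"0-30 days": 0, "31-60 days": 0, "61-90 days": 0, "90+ days": 0}

# Age each finding from its published date, not from when the scanner
# first saw it, so stale vulnerabilities on new systems still stand out.
for f in findings:
    age = (today - f["published"]).days
    if age <= 30:
        buckets["0-30 days"] += 1
    elif age <= 60:
        buckets["31-60 days"] += 1
    elif age <= 90:
        buckets["61-90 days"] += 1
    else:
        buckets["90+ days"] += 1

for bucket, count in buckets.items():
    print(f"{bucket:>10}: {count}")
```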

We're often brought in to help assess the performance of vulnerability management programs. We do a "gut check" on where your institution stands compared to regulatory requirements and other financial institutions, giving you and your board some real insight. If that sounds like something you're interested in, give us a call or shoot us an email.
