The average enterprise has over 60,000 assets and more than 24 million vulnerabilities, with dozens of new vulnerabilities disclosed each day. Manually analysing, correlating, and prioritising each one isn’t humanly possible – even for large security teams. Their time is limited, they’re under immense pressure from executives to fix vulnerabilities that are in the headlines (whether or not those vulnerabilities are ever likely to become credible threats), and there’s simply too much data coming in too fast for them to gain the upper hand. The Internet of Things (IoT) and big data solutions are adding even more to this data deluge, making what was already a losing proposition that much worse.
Yet despite all this vulnerability data, attackers only ever utilise around 1% of all vulnerabilities. Without a powerful, intelligent model powered by data science and machine learning, security and vulnerability teams are forced to ‘guess’ at which vulnerabilities to patch first. Many use the Common Vulnerability Scoring System (CVSS) to help narrow the list, but CVSS is of relatively limited value, primarily because it’s static in nature. It also assesses only the vulnerability itself, in isolation – it doesn’t consider other critical information such as the value of the affected asset, the current threat environment, active breaches, and what attackers are doing in real time. Only by gathering, correlating, and analysing all of this data together can security and vulnerability teams understand their true risk, and prioritise which actions to take first to remediate it.
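To make the argument concrete, here is a minimal, purely illustrative sketch of how a static CVSS base score might be blended with the contextual signals described above. Every field name, weight, and multiplier below is a hypothetical assumption for illustration, not any vendor’s actual model:

```python
# Toy risk-prioritisation sketch: combine a static CVSS base score with
# dynamic context (asset value, active exploitation, live breaches).
# All weights and field names are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class Finding:
    cvss_base: float          # static CVSS base score, 0.0-10.0
    asset_value: float        # business criticality of the asset, 0.0-1.0
    exploit_active: bool      # threat intel: exploitation observed in the wild
    breach_in_progress: bool  # asset linked to an active incident

def risk_score(f: Finding) -> float:
    """Blend static severity with real-time context (hypothetical weights)."""
    score = f.cvss_base / 10.0           # normalise CVSS to 0-1
    score *= 0.5 + 0.5 * f.asset_value   # down-weight low-value assets
    if f.exploit_active:
        score *= 1.5                     # active exploitation raises urgency
    if f.breach_in_progress:
        score *= 2.0                     # a live breach dominates everything
    return min(score, 1.0)               # clamp to 0-1

findings = [
    Finding(9.8, 0.2, False, False),  # 'headline' CVE on a low-value asset
    Finding(6.5, 1.0, True, False),   # mid CVSS, crown-jewel asset, exploited
]
ranked = sorted(findings, key=risk_score, reverse=True)
```

Under these made-up weights, the mid-severity vulnerability on a critical, actively exploited asset outranks the headline-grabbing CVE on a low-value asset – exactly the kind of re-ordering a CVSS-only view would miss.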