
Cyber Liability Insurance is one of the fastest-growing insurance products on the market today. Although the product has been around for a number of years, it has only recently started to gain traction amongst IT and security professionals, following a number of high-profile security breaches and the incoming General Data Protection Regulation, which will only see this market expand further.

If Cyber Insurance is the next addition to your service portfolio, or is already part of it, you need a better understanding of the potential risks that a loss across a technology, a sector or a specific company may bring. Failing to fully understand your client’s risk either makes the policy too expensive or makes you negligent, so which is it to be?

The challenge is that, currently, there is little market intelligence available to help you quantify the risk to your client’s infrastructure, their business or their clients. In fact, there is nothing that can genuinely help you understand the risk associated with a specific type of technology your client employs, or the latest threats to the industry sector in which they operate.

Likewise, there’s no model that helps you predict what or where the next breach might be, or understand the financial impact on the insured’s business. So what can you do?

A Moving Target?

Given there is never likely to be a claims database that can help predict risk with a high degree of granularity (at least not reliably), it is, as with most such problems, the human factor that takes away most of the predictability.

With the attack surface of organisations’ infrastructure continually changing, using past claims data as a means of predicting future cyber risk is the equivalent of trying to dig the next Channel Tunnel with a spoon – you’re not going to get anywhere fast, and by the time you do, the UK might have left the EU.

This constant evolution of threats has created a latent need to continually monitor a client’s exposure to risk, aligning it with their changing threat profile and vulnerability exposure. Alongside these should sit factors such as geographic location, industry sector and technology type, with each element scored against every other business connected to the internet – whether they’re on your books or not.
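As a rough illustration of the kind of scoring this implies, the sketch below combines those factors into a single score and ranks an insured against its peers. The factor names, weights and 0-1 scale are assumptions made for the example, not an existing underwriting model.

    # Illustrative only: a toy composite cyber-risk score built from the
    # factors mentioned above. The factor names, weights and 0-1 scale are
    # assumptions for the sake of the sketch, not a real underwriting model.
    FACTOR_WEIGHTS = {
        "threat_profile": 0.35,          # current threat activity against the sector
        "vulnerability_exposure": 0.35,  # exposed or unpatched services found
        "geography": 0.10,
        "industry_sector": 0.10,
        "technology_type": 0.10,
    }

    def composite_risk_score(factors: dict) -> float:
        """Combine per-factor scores (each 0.0-1.0) into a single 0-100 score."""
        weighted = sum(FACTOR_WEIGHTS[name] * factors.get(name, 0.0)
                       for name in FACTOR_WEIGHTS)
        return round(weighted * 100, 1)

    def percentile_rank(score: float, peer_scores: list) -> float:
        """Rank one insured's score against every other scored business."""
        below = sum(1 for s in peer_scores if s < score)
        return 100.0 * below / len(peer_scores) if peer_scores else 0.0

The percentile ranking is what turns an isolated score into the “scored against every other business” view described above.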

Applying a capability such as risk alerting – which informs your target market segment on a daily or hourly basis, creating awareness of risk changes and potential claim-aggregation scenarios – is now a distinct possibility. And being able to price accurately based on all of these factors is both tangible and scalable.
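Building on the scoring sketch above, a simple daily or hourly alerting pass might look like the following. The thresholds and the aggregation check are again illustrative assumptions rather than a description of any particular product.

    # Illustrative only: flag insureds whose score has moved sharply since the
    # last run, and spot potential claim-aggregation scenarios where many
    # insureds share a technology whose risk has just spiked.
    ALERT_THRESHOLD = 10.0    # points of score change worth an alert (assumed)
    AGGREGATION_MINIMUM = 50  # number of shared exposures worth flagging (assumed)

    def risk_alerts(previous: dict, current: dict) -> list:
        """Compare the latest scores with the previous run and report big moves."""
        alerts = []
        for insured, score in current.items():
            change = score - previous.get(insured, score)
            if abs(change) >= ALERT_THRESHOLD:
                alerts.append(f"{insured}: score moved {change:+.1f} to {score:.1f}")
        return alerts

    def aggregation_exposure(tech_by_insured: dict, spiked_technology: str) -> list:
        """List insureds exposed to one technology whose risk has just risen."""
        exposed = [name for name, techs in tech_by_insured.items()
                   if spiked_technology in techs]
        return exposed if len(exposed) >= AGGREGATION_MINIMUM else []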

Remodelling Insurance Risk

This model will soon become commonplace in the insurance sector. This type of analysis is already applied in other markets, giving organisations a real-time view of where they should improve their security and how best to invest scarce budget to achieve the biggest return – i.e. a robust, offensive security posture.

Entire market sectors and the infrastructure of whole nations have already been mapped out, providing an accurate and up-to-the-minute view of these rankings to CIOs and security chiefs, with the same framework now in place to fuel the cyber insurance market.

Ultimately this new model enables both insurance providers and their respective IT departments to better understand new trends in vulnerability exploitation and accurately predict how they could impact their organisations in an interconnected cyber insurance marketplace.

Pete Shoard, chief architect at cybersecurity service provider SecureData