June 25, 2019
Tokenization: Key to Cyber Insurance
by Robin Roberson and Alex Pezold
Tokenization is the key to significantly reducing the likelihood of a cyber event resulting in a claim.
Despite the troubling persistence of cybercrime, many organizations are not doing everything they can to protect themselves from the serious threat of data breaches and other cyberattacks. A Spiceworks survey of 581 IT professionals showed that 62% of organizations did not have cyber insurance policies. This can be attributed to a slew of reasons, but perhaps the most perplexing one is a lack of reliable policy offerings.
As demonstrated by the percentage of uninsured organizations, the market for cyber insurance is essentially untapped. According to the Insurance Journal, 71% of the market for cyber insurance belonged to just 10 writers in 2018, and the National Association of Insurance Commissioners reported that only 500 companies offered cyber insurance in 2016, compared with nearly 6,000 offering commercial insurance. Additionally, a Ponemon Institute study of more than 1,000 IT professionals showed 80% of those surveyed said they believed it was likely that a successful cyberattack on their organization would occur within 12 months. Clearly, the need for cyber insurance exists. It just isn’t being addressed.
The reason for this is that cyber insurance is a relatively new policy area. In fact, it’s still so new that it lacks the standardized terms and pricing that are so essential for creating baselines for policies in other markets. And even when those policies are created, it can be difficult to determine what qualifies as cyber coverage. If a breach occurs due to a stolen password, for example, is that considered cyber, crime, theft or general liability? This confusion also can lead to insureds making cyber-loss claims under different policies, even if the insurer doesn’t offer cyber insurance—underlining the importance of creating well-defined cyber policies to protect policyholders and insurers alike.
This lack of established policy structure leads to uncertainty about how policies should be written, making it difficult for companies to confidently guard themselves against losses. As a result, many companies don’t offer cyber insurance because they’re unsure how to properly quantify risk and, in turn, price policies. This apprehension is understandable. It’s difficult and risky to provide estimates without sufficient credible information from which to draw inferences.
See also: Quest for Reliable Cyber Security
Still, cyber insurance is quickly becoming one of the most profitable and fastest-growing lines of coverage. Premiums increased by 8% in 2018 to $2 billion, and the market is projected to reach $14 billion by 2022. So, how does an insurance company find a way to understand cyber risks, calculate their costs and reliably predict the frequency of losses? By significantly reducing the likelihood of an event resulting in a claim.
As obvious as it might sound, it’s important to remember that insurance ultimately comes down to risk, and when that risk is significantly reduced—or virtually eliminated—it benefits both the provider and the policyholder. To accomplish this in the cybersecurity arena, insurers should recommend that the companies they cover use risk-reducing technology, such as tokenization and encryption, to better guard their sensitive data and to reduce the likelihood of a data breach or other cyberattack. By leveraging these additional security processes, insurance companies can more accurately build policies, knowing the risk of damages from a data breach is effectively nonexistent.
Tokenization, such as that offered by the TokenEx Cloud Security Platform, especially excels at reducing risk through its use of pseudonymization and secure data vaults. Pseudonymization, also known as deidentification, is the process of desensitizing data to render it untraceable to its original data subject. It does so by replacing identifying elements of the data with a nonsensitive equivalent, or token, and storing the original data in a cloud-based data vault.
This does two things. First, it allows tokens to be stored in a business system for future use without interrupting crucial business-as-usual processes. Second, it virtually eliminates the risk of theft in the event of a data breach. Because there is no mathematical relationship between the token and its original data, tokens cannot be returned to their original form. Instead, when detokenization is required, the token is exchanged for the original data, which can be done only by the original tokenization system—there is no other way to obtain the original data from the token alone. So if a breach occurs, the exposed data is worthless to cybercriminals. The original, sensitive data sits undisturbed in a secure cloud data vault. In effect, no loss occurs.
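To make the mechanism concrete, here is a minimal sketch of vault-based tokenization in Python. This is an illustration of the general technique only, not the TokenEx platform or its API; the class and method names are hypothetical. The key point it demonstrates is that the token is randomly generated, so nothing about the original value can be derived from the token itself:

```python
import secrets

class TokenVault:
    """Illustrative vault-based tokenization (hypothetical, not a real product API)."""

    def __init__(self):
        self._vault = {}  # token -> original sensitive value

    def tokenize(self, sensitive_value: str) -> str:
        # The token is random, so there is no mathematical relationship
        # between it and the original data -- it cannot be "decrypted."
        token = secrets.token_hex(16)
        self._vault[token] = sensitive_value  # original stays in the vault
        return token

    def detokenize(self, token: str) -> str:
        # Only the system holding the vault can exchange a token
        # for the original data.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111 1111 1111 1111")
# The business system stores only the token; a breach of that system
# exposes nothing of value.
assert token != "4111 1111 1111 1111"
assert vault.detokenize(token) == "4111 1111 1111 1111"
```

In a real deployment the vault would be a hardened, cloud-hosted service rather than an in-memory dictionary, but the principle is the same: the stolen token is worthless without access to the vault.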
Additionally, tokenization can further reduce risk by addressing many international regulatory compliance obligations. Influential privacy regulations such as the European Union’s General Data Protection Regulation and the California Consumer Privacy Act refer to tokenization specifically as an appropriate technical mechanism for protecting sensitive data. It also reduces the scope of Payment Card Industry Data Security Standard compliance by removing payment card information from organizations’ cardholder data environments. Because tokenization satisfies controls concerning the processing of sensitive data, it can prevent losses stemming from fines and other penalties as a result of noncompliance.
See also: Paradigm Shift on Cyber Security
So when determining how your company should write its cyber insurance policies, consider recommending tokenization as a risk-reducing step for policyholders. It’s a small upfront investment for them that can better protect their data, their policy and your ability to provide reliable coverage.