Tag Archives: cyber coverage

5 Questions to Ask About Cyber

Cyber security placed first in a list of emerging casualty risks among insurance buyers, according to a survey of 135 insurance professionals conducted by London-based specialty lines broker RKH Specialty; 70% of respondents put cyber risk in the top spot. According to a Best’s News Service article about the survey, healthcare companies and retailers have been the major buyers. The reason for the growing demand for specialized cyber coverage is simple: losses stemming from cyber attacks and the resulting business interruption can be catastrophic.

Of course, not all policies are created equal. Here are some things to consider when purchasing cyber security coverage to help ensure that policyholders are adequately protected from losses after a cyber attack.

#1 If your business has a cyber attack, will your operations cease or be interrupted? If so, you need to make sure the cyber coverage you procure has “business interruption coverage.”

#2 Does your cloud contract stipulate that your third-party cloud vendor must meet all federal regulatory requirements for encrypting protected health information (PHI) and other personally identifiable information? If not, you need to verify how the third-party vendor is protecting your employees’ and patients’ information from cyber attacks and whether its cyber coverage will protect you.

#3 Do all mobile devices – such as smartphones and tablets – have proper encryption software to protect personally identifiable information and healthcare records? HIPAA security regulations require healthcare providers to use encryption to protect their patients’ electronic PHI. If they don’t, healthcare providers can be heavily penalized by federal regulators. Most cyber policies stipulate that, to be covered, insureds must adhere to the most recent encryption requirements for electronic protected health information (ePHI).

#4 Does your legal counsel have experience responding to cyber attacks? Businesses often have their own attorneys and use them frequently for everyday operations. However, the likelihood is that the in-house counsel does not specialize in the legalities of cyber attacks. Having an attorney who specializes in data breaches can make the process run more smoothly and ensure that important details are not missed or mishandled – such as notifying regulatory agencies, properly setting up notification of employees and patients as well as advising PR staff on all media inquiries and other external communications.

#5 Does your business have an expert consultant it can call on to make recommendations on cyber coverage or risk management strategies to reduce the risk of attacks – or to help manage the crisis after an attack? Enlisting the help of a cyber-liability expert and mapping out a plan can help mitigate the potentially catastrophic losses related to a data breach event.

Cyber: A Huge and Still-Untapped Market

Cyber insurance is a potentially huge, but still largely untapped, opportunity for insurers and reinsurers. We estimate that annual gross written premiums are set to increase from around $2.5 billion today to reach $7.5 billion by the end of the decade.

Businesses across all sectors are beginning to recognize the importance of cyber insurance in today’s increasingly complex and high-risk digital landscape. In turn, many insurers and reinsurers are looking to take advantage of what they see as a rare opportunity to secure high margins in an otherwise soft market. Yet many others are still wary of cyber risk. How long can they remain on the sidelines? Cyber insurance could soon become a client expectation, and insurers that are unwilling to embrace it risk losing out on other business opportunities.

In the meantime, many insurers face considerable cyber exposures within their technology, errors and omissions, general liability and other existing business lines. The immediate priority is to evaluate and manage these “buried” exposures.

Critical exposures

Part of the challenge is that cyber risk isn’t like any other risk that insurers and reinsurers have ever had to underwrite. There is limited publicly available data on the scale and financial impact of attacks. The difficulties created by the minimal data are heightened by the speed with which the threats are evolving and proliferating. While underwriters can estimate the likely cost of systems remediation with reasonable certainty, there simply isn’t enough historical data to gauge further losses resulting from brand impairment or compensation to customers, suppliers and other stakeholders.

A UK government report estimates that the insurance industry’s global cyber risk exposure is already in the region of £100 billion ($150 billion), more than a third of the Centre for Strategic and International Studies’ estimate of the annual losses from cyber attacks ($400 billion). And while the scale of the potential losses is on a par with natural catastrophes, incidents are much more frequent. As a result, there are growing concerns about both the concentrations of cyber risk and the ability of less experienced insurers to withstand what could become a fast sequence of high-loss events.

Insurers and reinsurers are charging high prices for cyber insurance relative to other types of liability coverage to cushion some of the uncertainty. They are also seeking to put a ceiling on their potential losses through restrictive limits, exclusions and conditions. However, many clients are starting to question the real value these policies offer, which may restrict market growth.

Insurers and reinsurers need more rigorous and relevant risk evaluation built around more reliable data, more effective scenario analysis and partnerships with government, technology companies and specialist firms. Rather than simply relying on blanket policy restrictions to control exposures, insurers should make coverage conditional on regular risk assessments of the client’s operations and the actions they take in response to the issues identified in these regular reviews. The depth of the assessment should reflect the risks within the client’s industry sector and the coverage limits.

This more informed approach would enable your business to reduce uncertain exposures while offering the types of coverage and more attractive premium rates clients want. Your clients would, in turn, benefit from more transparent and cost-effective coverage.

Opportunities for Growth

There is no doubt that cyber insurance offers considerable opportunity for revenue growth.

An estimated $2.5 billion in cyber insurance premium was written in 2014. Some 90% of cyber insurance is purchased by U.S. companies, underlining the size of the opportunities for further market expansion worldwide.

In the UK, only 2% of companies have standalone cyber insurance. Even in the more penetrated U.S. market, only around a third of companies have some form of cyber coverage. There is also a wide variation in take-up by industry, with only 5% of manufacturing companies in the U.S. holding standalone cyber insurance, compared with around 50% in the healthcare, technology and retail sectors. As recognition of cyber threats increases, take-up of cyber insurance in under-penetrated industries and countries continues to grow, and companies face demands to disclose whether they have cyber coverage (examples include the U.S. Securities and Exchange Commission’s disclosure guidance).

We estimate that the cyber insurance market could grow to $5 billion in annual premiums by 2018 and at least $7.5 billion by 2020.
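The implied growth rate behind these figures is easy to check. A quick sketch (premium figures taken from the estimates above; the function name is ours):

```python
# Implied compound annual growth rate (CAGR) linking the article's
# premium estimates: ~$2.5B written in 2014, projected $7.5B by 2020.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Constant annual growth rate linking two values over a period."""
    return (end_value / start_value) ** (1 / years) - 1

rate = cagr(2.5, 7.5, 2020 - 2014)
print(f"Implied annual growth: {rate:.1%}")  # 20.1%
```

Tripling premiums over six years works out to roughly 20% compound growth per year, an aggressive but not implausible pace for a young line of business.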

There is a strong appetite among underwriters for further expansion in cyber insurance writings, reflecting what would appear to be favorable prices in comparison with other areas of a generally soft market — the cost of cyber insurance relative to the limit purchased is typically three times the cost of cover for more-established general liability risks. Part of the reason for the high prices is the still limited number of insurers offering such coverage, though a much bigger reason is the uncertainty around how much to put aside for potential losses.

Many insurers are also setting limits below the levels sought by their clients (the maximum is $500 million, though most large companies have difficulty securing more than $300 million). Insurers may also impose restrictive exclusions and conditions. Some common conditions, such as state-of-the-art data encryption or 100% updated security patch clauses, are difficult for any business to maintain. Given the high cost of coverage, the limits imposed, the tight attaching terms and conditions and the restrictions on whether policyholders can claim, many policyholders are questioning whether their cyber insurance policies are delivering real value. Such misgivings could hold back growth in the short term. There is also a possibility that overly onerous terms and conditions could invite regulatory action or litigation against insurers.

Cyber Sustainability

We believe there are eight ways insurers, reinsurers and brokers could put cyber insurance on a more sustainable footing and take advantage of the opportunities for profitable growth:

1. Judging what you could lose and how much you can afford to lose

Pricing will continue to be as much of an art as a science in the absence of robust actuarial data. But it may be possible to develop a much clearer picture of your total maximum loss and match this against your risk appetite and risk tolerances. This could be especially useful in helping your business judge what industries to focus on, when to curtail underwriting and where there may be room for further coverage.

Key inputs include worst-case scenario analysis for your particular portfolio. If your clients include a lot of U.S. power companies, for example, what losses could result from a major attack on the U.S. grid? A recent report based around a “plausible but extreme” scenario in which a sophisticated group of hackers were able to compromise the U.S. electrical grid estimated that insurance companies would face claims ranging from $21 billion to $71 billion, depending on the size and scope of the attack. What proportion of these claims would your business be liable for? What steps could you take now to mitigate the losses in areas ranging from reducing risk concentrations in your portfolio to working with clients to improve safeguards and crisis planning?
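That portfolio question can be framed as back-of-envelope arithmetic: scale the industry-wide scenario loss by your share of the affected market. The 3% share below is purely hypothetical.

```python
# Back-of-envelope portfolio exposure for the grid scenario cited above:
# industry-wide claims of $21B to $71B, scaled by a hypothetical share
# of the affected market written by one insurer.

def portfolio_claims(industry_loss_bn: float, market_share: float) -> float:
    """Claims hitting one insurer, assuming losses scale with share."""
    return industry_loss_bn * market_share

share = 0.03  # hypothetical: 3% of affected policies
low, high = (portfolio_claims(x, share) for x in (21, 71))
print(f"Estimated claims: ${low:.2f}B to ${high:.2f}B")  # $0.63B to $2.13B
```

Even a small share of a systemic event produces losses in the hundreds of millions, which is why concentration analysis matters as much as per-policy pricing.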

2. Sharpen intelligence

To develop more effective threat and client vulnerability assessments, it will be important to bring in people from technology companies and intelligence agencies. The resulting risk evaluation, screening and pricing process would be a partnership between your existing actuaries and underwriters, who would focus on compensation and other third-party liabilities, and technology experts, who would concentrate on data and systems. This is akin to the CRO-CIO partnerships being developed to combat cyber threats within many businesses.

3. Risk-based conditions

Many insurers now impose blanket terms and conditions. A more effective approach would be to make coverage conditional on a fuller and more frequent assessment of the policyholder’s vulnerabilities and agreement to follow advised steps. This could include an audit of processes, responsibilities and governance within your client’s business. It could also include threat intelligence assessments, which would draw on the evaluations of threats to industries or particular enterprises, provided by government agencies and other credible sources. It could also include exercises that mimic attacks to test weaknesses and plans for response. As a condition of coverage, you could then specify the implementation of appropriate prevention and detection technologies and procedures.

Your business would benefit from a better understanding and control of the risks you choose to accept, hence lowering exposures, and the ability to offer keener pricing. Clients would in turn be able to secure more effective and cost-efficient insurance protection. These assessments could also help to cement a closer relationship with clients and provide the foundation for fee-based advisory services.

4. Share more data

More effective data sharing is the key to greater pricing accuracy. Client companies have been wary of admitting breaches for reputation reasons, while insurers have been reluctant to share data because of concerns over loss of competitive advantage. However, data breach notification legislation in the U.S., which is now set to be replicated in the EU, could help increase available data volumes. Some governments and regulators have also launched data sharing initiatives (e.g., MAS in Singapore or the UK’s Cyber Security Information Sharing Partnership). Data pooling on operational risk, through ORIC, provides a precedent for more industry-wide sharing.

5. Real-time policy update

Annual renewals and 18-month product development cycles will need to give way to real-time analysis and rolling policy updates. This dynamic approach could be likened to the updates on security software or the approach taken by credit insurers to dynamically manage limits and exposures.

6. Hybrid risk transfer

While the cyber reinsurance market is less developed than its direct counterpart, a better understanding of the evolving threat and maximum loss scenarios could encourage more reinsurance companies to enter the market. Risk transfer structures are likely to include traditional excess of loss reinsurance in the lower layers, with capital market structures being developed for peak losses. Possible options might include indemnity or industry loss warranty structures or some form of contingent capital. Such capital market structures could prove appealing to investors looking for diversification and yield. Fund managers and investment banks can bring in expertise from reinsurers or technology companies to develop appropriate evaluation techniques.

7. Risk facilitation

Given the ever more complex and uncertain loss drivers surrounding cyber risk, there is a growing need for coordinated risk management solutions that bring together a range of stakeholders, including corporations, insurance/reinsurance companies, capital markets and policymakers. Some form of risk facilitator, possibly the broker, will be needed to bring the parties together and lead the development of effective solutions, including the standards for cyber insurance that many governments are keen to introduce.

8. Build credibility through effective in-house safeguards

The development of effective in-house safeguards is essential in sustaining credibility in the cyber risk market, and trust in the enterprise as a whole. If your business can’t protect itself, why should policyholders trust you to protect them?

Banks have invested hundreds of millions of dollars in cyber security, bringing in people from intelligence agencies and even ex-hackers to advise on safeguards. Insurers likewise need to keep investing in their own cyber security, given the volume of sensitive policyholder information they hold; a compromise would cause a loss of trust that would be extremely difficult to restore. And the data cyber insurers hold, including information on clients’ cyber risks and defenses, is precisely the kind of material hackers want.

The starting point is for boards to take the lead in evaluating and tackling cyber risk within their own business, rather than simply seeing this as a matter for IT or compliance.

See the full report here.

How to Keep Malware in Check

Firewalls are superb at deflecting obvious network attacks. And intrusion detection systems continue to make remarkable advances. So why are network breaches continuing at an unprecedented scale?

One reason is the bad guys are adept at leveraging a work tool we all use intensively every day: the Web browser. Microsoft Internet Explorer, Mozilla Firefox, Google Chrome and Apple Safari by design execute myriad tiny programs over which network administrators have zero control. Most of this code execution occurs with no action required by the user. That’s what makes browsers so nifty.

A blessing and a curse

But that architecture is also what makes browsers a godsend for intruders. All a criminal hacker has to do is slip malicious code into the mix of legit browser executable code. And, as bad guys are fully aware, there are endless ways to do that.


The result: The majority of malware seeping into company networks today arrives via infectious code lurking on legit, high-traffic websites. The hackers’ game often boils down to luring victims into clicking through to an infected site, or simply waiting to see who shows up and gets infected.

So if browsers represent a wide open sieve to company networks, could inoculating browsers be something of a security silver bullet? A cadre of security start-ups laser-focused on boosting browser security is testing that notion. The trick, of course, is to do it without undermining usability.


Branden Spikes, Spikes Security founder and CEO

ThirdCertainty recently sat down with one of these security innovators, Branden Spikes, to discuss the progress and promise of improving Web browser security. Spikes left his job as CIO of SpaceX, where he was responsible for securing the browsers of company owner Elon Musk’s team of rocket scientists, to launch an eponymous start-up, Spikes Security. (Answers edited for clarity and length.)

3C: The idea of making Web browsing more secure certainly isn’t new.

Spikes: Let me break it down by drawing a line between detection and isolation. Browser security has been attempted with detection for many, many years, and it’s proven to not work. McAfee, Symantec, Sophos, Kaspersky and all the anti-virus applications that might run on your computer became Web-aware a while back. They all try to use detection mechanisms to prevent you from going to bad places on the Web.

Then you have detection that takes place at secure Web gateways. Websense, Ironport (now part of Cisco), Blue Coat, Zscaler and numerous Web proxies out there have security features based on the concept of preventing you from going to places that look malicious or that are known to be bad. Well, hackers have figured out how to evade detection, so that battle has been lost.

3C: Okay, so you and other start-ups are waging the browser battle on a different front?

Spikes: When you realize that detection doesn’t work, now you have to isolate. You have to say, “You know, I don’t trust browsers anymore. Therefore, I’m not going to let my stuff interact with the Web directly.” In the past five years, newer products have started to offer browser isolation technology. We’ve taken a very no-compromise approach to isolation technology.

Free IDT911 white paper: Breach, Privacy, And Cyber Coverages: Fact And Fiction

3C: So instead of detecting and blocking you’re isolating, and sort of cleansing, browser interactions?

Spikes: Yes, and much like with detection technology, isolation can exist in either the endpoint or on the network. Some examples of endpoint isolation might be Invincea or Bromium, where you’ve got your sandboxes that do isolation on the endpoint. I applaud all the efforts out there. It runs the whole gamut from minimal isolation to sandbox technologies built into browsers. There’s quite a bit of investment going into this.

3C: Your approach is to intercept browser activity before it can execute on the worker’s computer.

Spikes: If you come at the problem from the assumption that all Web browsers are fundamentally malware, you can understand our technology. We essentially take the malware off the endpoint entirely, and we isolate the execution of Web pages on a purpose-built appliance. What goes to the end user is a very benign stream of images and sound. There’s really no way for malware to get across that channel.

3C: If browser security gets much better, at least in the workplace, how much will that help?

Spikes: If we successfully solve the browser malware problem, we could, I think, allow for more strategically important things to occur in cybersecurity. We could watch the other entry points that are less obvious. This sort of rampant problem with the browser may have taken some very important attention away from other entry points into the network: physical entry points, social engineering and some of the more dynamic and challenging types of attacks.

Unclaimed Funds Can Lead to Data Breaches

When it comes to privacy, not all states are alike. This was confirmed yet again in the 50 State Compendium of Unclaimed Property Practices we compiled. The compendium ranks the amount of personal data that state treasuries expose during the process by which individuals can collect unclaimed funds. The data exposed can provide fraudsters with a crime exacta: claiming money that no one will ever miss and gathering various nuggets of personal data that can help facilitate other types of identity theft. The takeaway: Some states provide way too much data to anyone who is in the business of exploiting consumer information.

For those who take their privacy seriously, the baseline of our compendium—inclusion in a list of people with unclaimed funds or property—may in itself be unacceptable. For others, finding their name on an unclaimed property list isn’t a huge deal. In fact, two people on our team found unclaimed property in the New York database (I was one of them) while putting together the 50-state compendium, and there were no panic attacks.


That said, there is a reason to feel uncomfortable—or even outright concerned—to find your name on a list of people with unclaimed property. After all, you didn’t give anyone permission to put it there. The way a person manages her affairs (or doesn’t) should not be searchable on a public database like a scarlet letter just waiting to be publicized.

Then there’s the more practical reason that it matters. Identity thieves rely on sloppiness. Scams thrive where there is a lack of vigilance (lamentably, a lifestyle choice for many Americans despite the rise of identity-related crimes). The crux of the problem when it comes to reporting unclaimed property: It’s impossible to be guarded and careful about something you don’t even know exists, and, of course, it’s much easier to steal something if you know that it does.

The worst of the state unclaimed property databases provide a target-rich environment for thieves interested in grabbing the more than $58 billion in unclaimed funds held by agencies at the state level across the country.

States’ responses to questions about public databases

When we asked for comment from the eight states that received the worst rating in our compendium—California, Hawaii, Indiana, Iowa, Nevada, South Dakota, Texas and Wisconsin—five replied. In an effort to continue the dialogue around this all-too-important topic, here are a few of the responses from the states:

— California said: “The California state controller has a fraud detection unit that takes proactive measures to ensure property is returned to the rightful owners. We have no evidence that the limited online information leads to fraud.”

The “limited online information” available to the public on the California database provides name, street addresses, the company that held the unclaimed funds and the exact amount owed unless the property is something with a movable valuation like equity or commodities. To give just one example, we found a $50 credit at Tiffany associated with a very public figure. We were able to verify it because the address listed in the California database had been referenced in a New York Times article about the person of interest. Just those data points could be used by a scammer to trick Tiffany or the owner of the unclaimed property (or the owner’s representatives) into handing over more information (to be used elsewhere in the commission of fraud) or money (a finder’s fee is a common ruse) or both.

This policy seems somewhat at odds with California’s well-earned reputation as one of the most consumer-friendly states in the nation when it comes to data privacy and security.

— Hawaii’s response: “We carefully evaluated the amount and type of information to be provided and consulted with our legal counsel to ensure that no sensitive personal information was being provided.”

My response: Define “sensitive.” These days, name, address and email address (reflect upon the millions of these that are “out there” in the wake of the Target and Home Depot breaches) are all scammers need to start exploiting your identity. The more information they have, the more opportunities they can create, leveraging that information, to get more until they have enough to access your available credit or financial accounts.

— Indiana’s response was thoughtful. “By providing the public record, initially we are hoping to eliminate the use of a finder, which can charge up to 10% of the property amount. Providing the claimant the information up front, they are more likely to use our service for free. That being said, we are highly aware of the fraud issue and, as you may know, Indiana is the only state in which the Unclaimed Property Division falls under the Attorney General’s office. This works to our advantage in that we have an entire investigative division in-house and specific to unclaimed property. In addition, we also have a proactive team that works to reach out to rightful owners directly on higher-dollar claims to reduce fraud and to ensure those large dollar amounts are reaching the rightful owners.”

Protect and serve should be the goal

While Indiana has the right idea, the state still provides too much information. The concept here is to protect and serve—something the current system of unclaimed property databases does not do.

The methodology used in the compendium was quite simple: The less information a state provided, the better its ranking. Four stars was the best rating—it went to states that provided only a name and city or ZIP code—and one star was the worst, awarded to states that disclosed name, street address, property type, property holder and exact amount owed.
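The ranking rule can be expressed as a small scoring function. This is an illustrative sketch of the methodology as described, with field names of our own choosing:

```python
# Sketch of the compendium's star rating: the fewer personal-data
# fields a state discloses, the more stars it earns. Field names
# here are illustrative, not the compendium's actual schema.

DISCLOSED_FIELDS = {
    "name", "city", "street_address", "property_type",
    "property_holder", "exact_amount",
}

def star_rating(disclosed: set) -> int:
    """Four stars: name plus city/ZIP only. One star: everything."""
    extras = disclosed - {"name", "city", "zip"}
    if not extras:
        return 4
    if extras >= {"street_address", "property_type",
                  "property_holder", "exact_amount"}:
        return 1
    # Intermediate disclosure levels fall between the extremes.
    return 3 if len(extras) == 1 else 2

print(star_rating({"name", "city"}))  # 4
print(star_rating(DISCLOSED_FIELDS))  # 1
```

The point of the scheme is that each additional field a state publishes gives a fraudster one more verified data point to work with, so the score penalizes disclosure monotonically.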

In the majority of states in the U.S., the current approach to unclaimed funds doesn’t appear to be calibrated to protect consumers during this ever-growing epidemic of identity theft and cyber fraud. The hit parade of data breaches over the past few years—Target, Home Depot, Sony Pictures, Anthem and, most recently, the Office of Personnel Management—provides a case-by-case view of the evolution of cybercrime. Whether access was achieved by malware embedded in a spear-phishing email or came by way of an intentionally infected vendor, the ingenuity of fraudsters continues apace, and it doesn’t apply solely to mega databases. Identity thieves make a living looking for exploitable mistakes. The 50 State Compendium provides a state-by-state look at mistakes just waiting to be converted by fraudsters into crimes.

The best way to keep your name off those lists: Stay on top of your finances, cash your checks and keep tabs on your assets. (And check your credit reports regularly to spot signs of identity fraud. You can get your free credit reports every year from the major credit reporting agencies, and you can get a free credit report summary from Credit.com every month for a more frequent overview.) In the meantime, states need to re-evaluate the best practices for getting unclaimed funds to consumers. One possibility may be to create a search process that can only be initiated by the consumer submitting his name and city (or cities) on a secure government website.

How to Measure Data Breach Costs?

Businesses typically have a hard time quantifying potential losses from a data breach because of the myriad factors that need to be considered.

A recent disagreement between Verizon and the Ponemon Institute about the best approach to take for estimating breach losses could make that job a little harder.

For some time, Ponemon has used a cost-per-record measure to help companies and insurers get an idea of how much a breach could cost them. Its estimates are widely used.

The institute recently released its latest numbers showing that the average cost of a data breach has risen from $3.5 million in 2014 to $3.8 million this year, with the average cost per lost or stolen record going from $145 to $154.


The report, sponsored by IBM, showed that per-record costs have jumped dramatically in the retail industry, from $105 last year to $165 this year. The cost was highest in the healthcare industry, at $363 per compromised record. Ponemon has released similar estimates for the past 10 years.

But, according to Verizon, organizations trying to estimate the potential cost of a data breach should avoid using a pure cost-per-record measure.


ThirdCertainty spoke with representatives of both Verizon and Ponemon to hear why they think their methods are best.

Verizon’s Jay Jacobs

Ponemon’s measure does not work very well with data breaches involving tens of millions of records, said Jay Jacobs, Verizon data scientist and an author of the company’s latest Data Breach Investigations Report (DBIR).

Jacobs says that, when Verizon applied the cost-per-record model to breach-loss data obtained from 191 insurance claims, the numbers it got were very different from those released by Ponemon. Instead of hundreds of dollars per compromised record, Jacobs said, his math turned up an average of 58 cents per record.

Why the difference? With a cost-per-record measure, the method is to divide the sum of all losses stemming from a breach by the total number of records lost. The issue with this approach, Jacobs said, is that cost per record typically tends to be higher with small breaches and drops as the size of the breach increases.

Generally, the more records a company loses, the more it’s likely to pay in associated mitigation costs. But the cost per record itself tends to come down as the breach size increases, because of economies of scale, he said.

Many per-record costs associated with a breach, such as notification and credit monitoring, drop sharply as the volume of records increases. When costs are averaged across millions of records, per-record costs fall dramatically, Jacobs said. For massive breaches in the range of 100 million records, the cost can drop to pennies per record, compared with the hundreds and even thousands of dollars that companies can end up paying per record for small breaches.

“That’s simply how averages work,” Jacobs said. “With the megabreaches, you get efficiencies of scale, where the victim is getting much better prices on mass-mailing notifications” and most other contributing costs.
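The averaging effect Jacobs describes can be reproduced with a toy cost model. The parameters below are hypothetical: a fixed response cost plus sub-linear per-record costs is enough to make the per-record average collapse as breach size grows.

```python
# Toy breach-cost model (hypothetical parameters): a fixed incident
# response cost plus per-record costs that grow sub-linearly because
# of volume discounts on notification, monitoring, etc.

def breach_cost(records: int) -> float:
    """Total breach cost in dollars under the toy model."""
    return 200_000 + 80 * records ** 0.6

for n in (1_000, 100_000, 100_000_000):
    print(f"{n:>11,} records -> ${breach_cost(n) / n:,.2f} per record")
```

Running this gives roughly $205 per record for a 1,000-record breach, about $2.80 at 100,000 records, and around a nickel per record at 100 million, mirroring the gap between Ponemon-style averages for small breaches and the pennies-per-record figures seen in megabreaches.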

Ponemon’s report does not reflect this because its estimates are only for breaches involving 100,000 records or fewer, Jacobs said. The estimates also include hard-to-measure costs, such as those of downtime and brand damage, that don’t show up in insurance claims data, he said.

An alternative method is to apply more of a statistical approach to available data to develop estimated average loss ranges for different-size breaches, Jacobs said.

While breach costs increase with the number of records lost, not all increases are the same. Several factors can cause costs to vary, such as how robust a company’s incident response plans and its pre-negotiated contracts for customer notification and credit monitoring are, Jacobs said. Companies might want to develop a model that captures these variances in costs as completely as possible and express potential losses as an expected range rather than as per-record numbers.

Using this approach on the insurance data, Verizon has developed a model that, for example, lets it say with 95% confidence that the average loss for a breach of 1,000 records is forecast to come in at between $52,000 and $87,000, with an expected cost of $67,480. Similarly, the expected cost for a breach involving 100 records is $25,450, but average costs could range from $18,120 to $35,730.

Jacobs said this model is not perfectly accurate because of the many factors that affect breach costs. As the number of records breached increases, the overall accuracy of the predictions begins to decrease, he said. Even so, the approach is more scientific than averaging costs and arriving at per-record estimates, he said.
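A minimal sketch of the kind of model Jacobs describes: fit an ordinary least-squares line in log-log space (cost versus record count), then report an expected cost plus a range. The claims data here is synthetic and the coefficients are illustrative, not Verizon's.

```python
import numpy as np

# Fit log(cost) ~ log(records) on synthetic "claims" data, then
# forecast an expected cost and an approximate 95% range instead of
# quoting a single per-record figure.

rng = np.random.default_rng(42)
n_records = 10 ** rng.uniform(2, 6, size=191)      # 191 simulated claims
log_cost = 3.5 + 0.42 * np.log10(n_records) + rng.normal(0, 0.25, size=191)

slope, intercept = np.polyfit(np.log10(n_records), log_cost, 1)
residual_sd = np.std(log_cost - (intercept + slope * np.log10(n_records)))

def forecast(records: float):
    """Expected breach cost and an approximate 95% range, in dollars."""
    mid = intercept + slope * np.log10(records)
    lo, hi = mid - 1.96 * residual_sd, mid + 1.96 * residual_sd
    return 10 ** mid, 10 ** lo, 10 ** hi

expected, low, high = forecast(1_000)
print(f"1,000 records: expected ${expected:,.0f}, "
      f"range ${low:,.0f} to ${high:,.0f}")
```

Note the caveat Jacobs raises applies here too: a simple residual-based interval understates uncertainty for very large breaches, where the data is thinnest.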

Ponemon’s Larry Ponemon

Larry Ponemon, chairman and founder of the Ponemon Institute, stood by his methodology and said the estimates are a fair representation of the economic impact of a breach.

Ponemon’s estimates are based on actual data collected from individual companies that have suffered data breaches, he said. The institute’s model considers all costs that companies can incur when they suffer a data breach, drawing on estimates from more than 180 cost categories in total.

By contrast, the Verizon model looks only at the direct costs of a data breach collected from a relatively small sample of 191 insurance claims, Ponemon said. Such claims often provide an incomplete picture of the true costs incurred by a company in a data breach. Often, the claim limits also are smaller than the actual damages suffered by an organization, he said.

“In general, the use of claims data as surrogate for breach costs is a huge problem, because it underestimates the true costs” significantly, Ponemon said.

Verizon’s use of logarithmic regression to arrive at the estimates is also problematic because of the small data size and the fact that the data was not derived from a scientific sample, he said.

Ponemon said the costs of a data breach are linearly related to the size of the breach. Per-record costs come down as the number of records increases, but not to the extent portrayed by Verizon’s estimates, he said.

“I have met several insurance companies that are using our data to underwrite risk,” he said.