
The Threat From ‘Security Fatigue’

There is no mistaking that, by now, most consumers have at least a passing awareness of cyber threats.

Two other things are also true: too many people fail to take simple steps to stay safer online, and individuals who become victims of identity theft, in whatever form, tend to be baffled about how to respond.

A new survey by the nonprofit Identity Theft Resource Center reinforces these notions. ITRC surveyed 317 people who used the organization’s services in 2017 and had experienced identity theft. The study was sponsored by CyberScout, which also sponsors ThirdCertainty. A few highlights:

  • Nearly half (48%) of data breach victims were confused about what to do.
  • Only 56% took advantage of identity theft protection services offered after a breach.
  • Some 61% declined identity theft services because of lack of understanding or confusion.
  • Some 32% didn’t know where to turn for help in the event of a financial loss caused by identity theft.

Keep your guard up

These psychological shock waves, no doubt, are coming into play yet again for the 143 million consumers who lost sensitive information in the Equifax breach. The ITRC findings suggest that many Equifax victims are likely to be frightened, confused and frustrated — to the point of acquiescence. That’s because the digital lives we lead come with risks no one foresaw at the start of this century, and consumers need to be constantly vigilant about them. Yet cyber attacks are now so ubiquitous that they register as white noise for many people.

See also: Quest for Reliable Cyber Security  

The ITRC study is the second major report showing this to be true. Last fall, a majority of computer users polled by the National Institute of Standards and Technology said they experienced “security fatigue” that often correlates to risky computing behavior they engage in at work and in their personal lives.

The NIST report defines “security fatigue” as a weariness or reluctance to deal with computer security. As one of the study’s research subjects said about computer security, “I don’t pay any attention to those things anymore. … People get weary from being bombarded by ‘watch out for this or watch out for that.’”

Cognitive psychologist Brian Stanton, who co-wrote the NIST study, observed that “security fatigue … has implications in the workplace and in peoples’ everyday life. It is critical because so many people bank online, and since health care and other valuable information is being moved to the internet.”

Make no mistake, identity theft is a huge and growing problem. Some 41 million Americans have already had their identity stolen — and 50 million reported being aware of someone else who was victimized, according to a Bankrate.com survey.

Attacks are multiplying

With sensitive personal data for the clear majority of Americans circulating in the cyber underground, it should come as no surprise that identity fraud is on a rising curve. In the first half of 2016, identity theft accounted for 64% of all data breaches, according to Breach Level Index. One reason for the rise was a huge jump in internet fraud: card-not-present (CNP) fraud leaped by 40% in 2016, while point-of-sale (POS) fraud remained unchanged.

It’s not just weak passwords and individual errors that are fueling the rise in online fraud. Organizations we all trust with our personal information are being attacked every single day. The massive breach of financial and personal history data for 143 million people from credit bureau Equifax is just the latest example.

Over the past four years, there has been a steady drumbeat of major data breaches: Target, Home Depot, Kmart, Staples, Sony, Yahoo, Anthem, the U.S. Office of Personnel Management and the Republican National Committee, just to name a few. The hundreds of millions of records stolen never perish; they remain in circulation in the cyber underground, available for sale and/or for use in the next innovative fraud campaign.

Be safe, not sorry

Protecting yourself online doesn’t have to be difficult or complicated. Here are seven ways to better protect your privacy and your identity today:

  • Freeze your credit reports at the big three credit bureaus so scammers can’t use your identity to take out loans or credit cards
  • Add a website grader to your browser to avoid malware
  • Enroll in ID theft coverage with your bank, insurer or employer — it could be free or surprisingly inexpensive
  • Get and use a password vault so you can create and use hard-to-guess passwords (see the sketch after this list)
  • Be knowledgeable about common cyber scams
  • Add a verbal password to your bank account login and set up text alerts for unusual activity
  • Come up with a consistent way to decide whether it’s safe to click on something.
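
On the password-vault tip, the point of a vault is that it can generate and remember strings no human would choose. The Python sketch below is a rough illustration only, not tied to any particular vault product; it uses the standard-library secrets module, and the 20-character length and character set are arbitrary choices for the example.

    import secrets
    import string

    def generate_password(length: int = 20) -> str:
        """Return a random, hard-to-guess password of the given length."""
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())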

There is a bigger implication of losing sensitive information as an individual: it almost certainly will have a negative ripple effect on your family, friends and colleagues. There is a burden on consumers to be more active about cybersecurity, just as there is a burden on companies to make it easier for individuals to do so.

See also: Cybersecurity: Firms Are Just Sloppy  

NIST researcher Stanton describes it this way: “If people can’t use security, they are not going to, and then we and our nation won’t be secure.”

Melanie Grano contributed to this story.

How to Determine Your Cyber Coverage

Public agencies and organizations around the world are making cyber risk their top priority. North American policyholders dominate the market, but Europe and Asia are expected to grow rapidly over the next five years due to new laws and significant increases in targeted attacks, such as ransomware. Various experts predict the $3 billion global cyber insurance market will grow two-, three- or even four-fold by 2020.

Deciding how much cyber insurance to buy is no inconsequential matter, and the responsibility rests squarely with the board of directors (BoD). Directors and executives should have the highest-level view of cyber risk across the organization and are best-positioned to align insurance coverage with business objectives, asset vulnerability, third-party risk exposure and external factors.

See also: New Approach to Cyber Insurance  

So, how much does your organization stand to lose from a supply chain shutdown, a website outage or service downtime?

Recent data points from breach investigations help frame the discussion around risks and associated costs. Following a variety of high-profile breaches helps ensure that your projected coverage requirements match up with reality. Be sure to follow older cases for deeper insight into the full expense compared with insurance payout; related costs and losses are often incurred for years afterward due to customer and market response as well as legal and regulatory enforcement actions.

In 2013, Target suffered a very public breach that resulted in the resignation of the CEO, a 35-year employee. Target had purchased $100 million in cyber insurance, with a $10 million deductible. At last count, Target reported that the breach costs totaled $252 million, with some lawsuits still open.

Home Depot announced in 2014 that between April and September of that year cyber criminals stole an estimated 56 million debit and credit card numbers – the largest such breach to date. The company had procured $105 million in cyber insurance and reported breach-related expenses of $161 million, including a consumer-driven class action settlement of $20 million.

These cases illustrate the need for thoughtful discussion when deciding how much breach insurance to buy. Breach fallout costs depend on multiple factors, are not entirely predictable and can rise quickly due to cascading effects. Cases in point: the bizarre events surrounding Sony’s breach and the post-breach evisceration of Yahoo’s pending deal with Verizon.
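
To make that coverage gap concrete, here is a simplified sketch that assumes a policy reimburses breach costs above the deductible up to the policy limit; real policies carry sub-limits, exclusions and co-insurance that this ignores. Applied to the Target figures reported above:

    def retained_loss(total_cost: float, limit: float, deductible: float) -> float:
        """Loss the insured keeps: the deductible plus anything above the policy limit."""
        insurer_pays = min(max(total_cost - deductible, 0), limit)
        return total_cost - insurer_pays

    # Target's reported figures, in $ millions: $252M in costs, $100M limit, $10M deductible
    print(retained_loss(252, 100, 10))  # 152.0 -- more than half the reported cost stays with Target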

Organizations need to review their security posture and threat environment on a regular basis and implement mechanisms for continuous improvement. The technology behind cyber security threats and countermeasures is on a sharp growth curve; targets, motives and schemes shift unpredictably. Directors may find it useful to assess risk levels and projected costs for multiple potential scenarios before deciding on cyber insurance amounts.

Most policy premiums are currently based on self-assessments. The more accurate the information provided in your application, the better protected the organization will be. Most policies stipulate obligations the insured must meet to qualify for full coverage; be sure to read the fine print and seek expert advice.

A professional security assessment can pinpoint areas in need of improvement. If you claim to be following specific protocols, but a post-breach investigation finds they were poorly implemented, circumvented or insufficiently monitored, the insurer may deny or reduce coverage. Notify your insurance provider immediately about significant changes to your security program.

Review policy details regularly to ensure they match prevailing threats and reflect the evolution of crimeware and dark web exploits. Cyber insurance carriers continually adjust their offerings based on risk exposure and litigation outcomes.

See also: Promise, Pitfalls of Cyber Insurance  

As the industry matures, cyber insurance policies will become more standardized. For now, it’s an evolving product in a dynamic market; boards and executives need to keep an eye on developments. Simultaneously, they must maintain a high degree of visibility across their security program. Checking off compliance requirements, writing policies and purchasing security software isn’t sufficient.

My advice is to lead from the top. Organizations need to ensure risk assessments are thorough and up-to-date, policies are communicated and enforced and security technology is properly configured, patched and monitored.

Turning a blind eye to cyber threats and organizational vulnerabilities can have disastrous consequences. Cyber insurance may soften the financial blows, but it only works in conjunction with an enterprise-wide commitment to security fundamentals and risk management.

New Approach to Cyber Insurance

The most active players in the fledgling but fast-growing cyber insurance market are hustling to differentiate themselves.

The early adopters and innovators are doing so by accelerating the promotion of value-added services—tools and systems that can help companies improve their security postures and thus reduce the likelihood of ever filing a cyber damages claim.

As more businesses look to purchase cyber liability policies, insurance sellers are striving to dial up the right mix of such services, a blend that can help them profitably meet this pent-up demand without taking on too much risk.

The incentive is compelling: Consultancy PricewaterhouseCoopers estimates that the cyber insurance market will grow from about $2.5 billion in 2014 to $7.5 billion by 2020. European financial services giant Allianz goes a step further with its prediction that cyber insurance sales will top $20 billion by 2025.

This anticipated growth in demand for cyber liability coverage—coupled with the comparatively low level of loss claims—has created strong competition in this nascent market.

The Insurance Information Institute estimated last year that about 60 companies offered standalone cyber liability policies. In total, more than 500 insurers provide some form of cyber risk coverage, according to a recent analysis by the National Association of Insurance Commissioners.

“There are quite a few players, so they are looking for ways to differentiate themselves and find competitive edges,” says David K. Bradford, co-founder and chief strategy officer for Advisen, an insurance research and analysis company.

Insurance companies make adjustments

Insurance carriers hot after a piece of this burgeoning market are beginning to offer value-added services to make their cyber offerings stand out.

See also: 8 Points to Consider on Cyber Insurance  

Rather than growing these services in-house, most are partnering with vendors and consultants that specialize in awareness training, network security and data protection. Services that boost the value of cyber policies are being supplied free or offered at a discount. Typical cyber insurance value-added services include:

  • Phishing and cyber hygiene awareness training
  • Incident response planning
  • Security risk assessments
  • Best practices web portals and software-as-a-service tools
  • Threat detection services
  • Employee and customer identity theft coverage
  • Breach response services

One measure of value-added services gaining traction comes from the Betterley Report, which recently surveyed 31 carriers that offer cyber policies. Betterley found that about half offered “active avoidance services,” while nearly all offered some sort of pre-breach planning tools.

Rick Betterley, president of Betterley Risk Consultants, which publishes the Betterley Report, says there is still a long way to go. “There’s much more that can be done to help the insureds be better protected,” he says.

Betterley is a big proponent of adding risk-management services to cyber policies. He calls the approach Cyber 3.0, adding that it’s akin to the notion of insuring a highly protected risk in a property insurance policy. Cyber value-added services, he says, are the equivalent of fire insurance companies requiring sprinklers.

“It’s not required that insurance companies provide the services, but it’s required that they help insureds identify what services are likely to generate a reduction in premiums,” Betterley says.

Sector faces new challenges

That said, the cyber insurance sector is still finding its way. With auto crashes, fire or natural disasters, losses are well defined and fully understood. Cyber exposures, by contrast, are hard to pin down. Network vulnerabilities are extremely complex and continually evolving. And historic data on insurance claims related to data breaches remains, at least for the moment, in short supply.

An added challenge, Betterley says, is that insurance companies are unable to satisfactorily measure the effectiveness of security technologies and services in preventing a data breach.

Advisen’s Bradford agrees. “It’s a rapidly evolving area that changes day to day, and underwriters are definitely wary of recommending a particular vendor or approach,” he says.

Eventually, the insurance industry will figure out how to make meaningful correlations and separate the wheat from the chaff.

“In bringing in these value-added services, we can help shore up some of those areas where we’re seeing human error,” observes Dave Wasson, cyber liability practice leader at Hays Cos., a commercial insurance brokerage and risk management consultancy. “We’ll be at a point where we’ll know what makes a difference, and we can put our money, time and efforts into those solutions.”

Eric Hodge, director of consulting at IDT911 Consulting, part of IDT911, which underwrites ThirdCertainty.com, concurs. One ironic result of the recent spike of ransomware attacks aimed at businesses, Hodge says, is that more hard data is getting generated that is useful for calculating loss profiles.

See also: Another Reason to Consider Cyber Insurance  

Along the same lines, settlements of class-action lawsuits related to breaches at high-profile companies, such as Target and Sony, are helping amass data that will help the industry flesh out evolving actuarial tables.

“Losses from cyber attacks and data breaches are becoming easier to quantify,” Hodge says. “And market forces are absolutely lining up to reward the wider use of these activities. It’s harder to ignore the fiscal argument for an insurer to go the extra mile in helping the insured organizations make sure that a costly breach doesn’t occur.”

AIG blazes trail

One notable proponent leading the way is multinational insurance giant AIG, which is nurturing partnerships with about a half-dozen cybersecurity vendors.

AIG services—some of which are offered to policyholders at no cost—range from threat intelligence and cyber risk maturity assessments to active detection and vulnerabilities assessments.

RiskAnalytics, one of AIG’s partner vendors, provides threat intelligence services, including a service that detects and shuns blacklisted IP addresses. Any AIG insured with a minimum $5,000 policy can participate at no additional cost.
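
Setting aside the specifics of any vendor’s product, the general idea of “shunning” blacklisted addresses can be sketched generically: maintain a feed of known-bad IPs and drop or log any connection that involves one. The Python fragment below is a hypothetical illustration, with a hard-coded blocklist and documentation-range addresses standing in for a real threat-intelligence feed:

    # Hypothetical illustration of IP "shunning" -- not any vendor's actual product or API.
    BLOCKLIST = {"203.0.113.7", "198.51.100.23"}  # stand-in for a live threat-intelligence feed

    def should_shun(remote_ip: str, blocklist: set = BLOCKLIST) -> bool:
        """Return True if traffic to or from this address should be dropped."""
        return remote_ip in blocklist

    for ip in ("203.0.113.7", "192.0.2.1"):
        print(ip, "drop" if should_shun(ip) else "allow")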

The company’s partnership is exclusive to AIG, and appears to be very popular.

“We’re bringing in multiyear contracts, and the average sales price is on an impressive trajectory,” says RiskAnalytics Chief Operating Officer Kurt Lee. “It’s all born out of (customers) using that (introductory) service through the policy.”

Recognizing the trend, more vendors are seizing the opportunity to market their services to insurance carriers.

Vendors are willing to jump through the many hoops because a partnership with an insurance company is an opportunity to get a soft introduction to a potential client, says Mike Patterson, vice president of strategy at Rook Security, a managed security services provider (MSSP) that is reaching out to carriers.

Dismantling roadblocks

As with any new approach, broad adoption of cyber insurance value-added services isn’t without hurdles. One major obstacle is the “this-isn’t-how-we’ve-always-done-it” way of thinking, says IDT911’s Hodge. “It’s like trying to change our election processes—people resist altering a system that has been in place for a couple hundred years.”

Another barrier is cost. Insurance companies tend to reserve free or discounted added services for heavyweight clients that spend small fortunes on annual premiums, says John Farley, vice president and cyber risk practice leader at insurance brokerage HUB International.

“Carriers can’t give away a lot of resources, so the smaller premium payers are not getting a lot of these services,” Farley says. “But if they can streamline and automate resources and figure out how to get customizable, usable information to the insurance buyer, that insurance carrier will probably stand out.”

Brian Branner, RiskAnalytics’ executive vice president, says that’s exactly one of the benefits that AIG derives from their partnership.

“If we can get the insureds to use the services we provide, we should lower AIG’s loss ratio because they’ll be safer organizations, and AIG should receive less claims,” he says.

Hidden costs of a breach can affect a large enterprise for years, and prove catastrophic to a small business. So insurance companies in the vanguard are looking to find business clients that are taking information security seriously.

See also: The State of Cyber Insurance  

As more companies buy cyber policies, and use any attendant services, the result could be a halo effect, says IDT911’s Hodge.

“This is certainly something that the insurers are counting on,” Hodge says. “A more secure buyer is a lower actuarial risk to the insurer.”

Meanwhile, policyholders should steadily become better equipped to securely do business in an internet-centric economy riddled with evolving exposures.

Hodge says: “In my experience, the buyer is often pleasantly surprised by the improvement that can come about quickly in terms of knowing their risk, being compliant with their industry standards and being able to indicate to the marketplace that they are taking good care of their customer’s information.”

This post originally appeared on ThirdCertainty. It was written by Rodika Tollefson.

What Liabilities Do Robots Create?

The intersection of humanity and robots is moving from the human imagination into tangible reality. Many books and movies, like I, Robot and Her, have explored potential impacts of that intersection, but the complete intersection will actually be that of humanity, robots and liability.

It is insufficient, however, to know that advanced robotics and liability will intersect. Advanced robotics is going to thrust upon insurers a world that is extremely different from the one they sought to indemnify in the 20th century. Already, drones and autonomous vehicles are forcing some parts of the insurance sector to try to determine where responsibility exists so that liability can be appropriately assigned, and those efforts will continue for at least the next decade.

The liability created by robots operating alongside humanity now falls to commercial, and especially professional, insurers, who must engineer robotic liability products that provide clients and the global economy with stability while providing insurers a valuable stream of revenue.

There are some ground rules that must be considered before bringing robotic liability to life. First, what is the definition of a robot? For the purposes of this paper, Professor Ryan Calo’s definition of a robot will be used. According to the professor, a robot can sense, process and act on its environment. There is also the realization that currently it may be beyond human ability to create a unified robotic liability doctrine for insurance purposes. This is largely due to the environments in which robots will exist, as well as the ramifications of those environments from a legal, physical and practical standpoint. After all, drones capable of sustained flight are inherently going to exist in a different realm from ground-based autonomous vehicles, and the same is true for robots capable of sub-orbital and intra-planetary flight. Therefore, this paper is going to focus on a discrete part of robotic liability: those robots used in agricultural fields. Another reason for focusing on one area of robotics is to keep things simple while exploring this uncharted part of the insurance sector.

See also: Here Comes Robotic Process Automation

The farmer, the field and the harvest, the most commonplace of settings, provide an area where dimensions of robotic liability can be easily analyzed and understood. Plant husbandry draws on thousands of years of human knowledge, and it is already using aerial drones and big data analytics to maximize crop yields. Additionally, the agricultural arena has a high likelihood of being an area wherein robots cause significant shifts in multiple areas of the economy.

Within the next two or three years, a robot like this paper’s fictional AARW (autonomous agriculture robotic worker) will be created and sent to the fields to begin to replace human labor when it comes time to harvest a crop. There are multiple reasons for this belief, starting with the advance of robotic technology. The 2015 DARPA Robotics Challenge demonstrated an array of robots that will be the ancestors of a robot like AARW. In that competition, robots were required to walk on uneven terrain, accomplish tactile tasks and even drive a traditional vehicle. While the robots in that challenge were not largely or fully autonomous, they represent a major step toward productive autonomous robots.

There are already simple machines that can perform a variety of functions, even learning a function by observing human movements, and the gap between the drawing board and reality is being quickly eroded with the tremendous amount of computer hardware and software knowledge that is produced by both private and public institutions each month.

Moreover, there are strong labor and economic incentives for the introduction of robots into the agricultural field. Robots are able to work non-stop for 12 hours, are free from any form of health and labor laws and can have life expectancies in the five- to 15-year range. Crops are, more often than not, planted in fields with straight rows and require only the robotic ability to pick up an item, like a watermelon, take it to a bin, deposit the melon in the bin and then repeat the same steps on the next watermelon. All this requires only a modest amount of know-how on the robot’s part, as the toy sketch below illustrates.
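
To show how modest that know-how really is, the toy Python sketch below reduces the harvest to its bare logic. It is purely illustrative: AARW is this paper’s hypothetical robot, and the Melon class and harvest loop stand in for real sensing and manipulation, which are of course far harder problems.

    # Toy model of the hypothetical AARW's harvest loop -- illustrative only, not a robotics API.
    from dataclasses import dataclass

    @dataclass
    class Melon:
        ripe: bool

    def harvest_row(row, bin_):
        """Walk one straight row, moving each ripe melon into the bin."""
        for melon in row:
            if melon.ripe:          # sense: is this one ready to pick?
                bin_.append(melon)  # act: pick it up, carry it to the bin, deposit it

    field = [[Melon(True), Melon(False), Melon(True)] for _ in range(3)]  # three straight rows
    collected = []
    for row in field:
        harvest_row(row, collected)
    print(len(collected), "melons harvested")  # 6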

If AARW is built to industrial quality standards, then it will only require a minimal amount of maintenance over the course of each year. And if AARW is powered using solar panels, then the cost of its fuel will be included in the robot’s purchase price, which means that the minor maintenance cost along with a possible storage cost will be the only operating costs of AARW. With its ability to work non-stop and with no overhead costs for complying with human health and labor laws, AARW will be a cheaper alternative to human workers, providing a strong economic incentive for farmers to use robots in the field.

An agricultural robot will, however, create unique exposures for a farmer, and those exposures will cultivate the need for robotic liability. Arguments can be made for completed operations/product liability and technology E&O exposures with AARW in the field. However, there are multiple reasons why it would be unwise to try to relegate liability for AARW to any current product.

First and foremost, there is a strong expectation among scholars and legal experts that robots are going to do unexpected things. Imagine: At harvest time, the farmer brings AARW to the field to collect the crop of watermelons. The field happens to be near a highway on which big rigs travel, and part of the field lies next to a blind corner in the highway. As AARW successfully harvests one row after another, the farmer’s attention drifts, and she begins talking with a neighbor. Suddenly, there is a screech of tires and a loud bang as a big rig slams into AARW, which, for an unknown reason, walked into the highway.

Who should bear responsibility for the untimely demise of AARW?

If AARW were a cow, then the insurer of the big rig would have to reimburse the farmer for the loss of one of her cows. In certain respects, AARW and a cow are the same in that they can sense, process and act upon their environment. However, a cow has what is often described as a mind of its own, which is why insurance companies and the law have come to place the fault of a rogue cow on the unwitting vehicle operator instead of the aggrieved farmer.

AARW, though, is not a cow. It is a machine created to harvest produce. Does the software that controls the robot’s actions equate to the free will of an animal, like a cow? The farmer who lost the cow does not demand her money back from the rancher who sold her a reckless bovine product. Why should the creator of the robot be expected to reimburse the farmer for the loss of AARW? How does it make sense for product liability to come into play when the rancher shares no blame for the indiscreet cow? Technology companies have been extremely successful at escaping liability for the execution of poorly crafted software, so the farmer is unlikely to find any remedy in bringing a claim against the provider of the software, even if it is a separate entity from the one that assembled AARW.

Regardless of where blame is assigned, the issue would be awkward for insurers that tried to force liability for the robot’s actions into any current insurance product. At worst, the farmer would not be made whole (technology E&O), and, at best, changing existing laws would likely only partially compensate the farmer for the loss of AARW.

See also: The Need to Educate on General Liability  

The liability waters are already murky without robotic liability. Machine learning will likely create situations that are even more unexpected than the above possibility. Imagine if AARW imitated the farmer in occasionally giving free produce samples to people passing the field. In the absence of robotic liability insurance, who should be responsible for a mistake or offending action on the robot’s part?

It would be unfortunate to place all of the blame on AARW or the farmer. The situations also call into question the quality of programming with which the robot was created. The historical evidence presented in M.C. Elish and Tim Hwang’s paper, “Praise the Machine! Punish the Human!”, suggests it would be unwise to expect liability to be appropriately adjudicated were a farmer to sue the creator of AARW.

With an autonomous robot like AARW, it is possible to bring into consideration laws related to human juveniles. A juvenile is responsible if she decides to steal an iPad from a store, but, if she takes the family Prius for a joyride, then the parents are responsible for any damage the juvenile causes. Autonomous robots will inherently be allowed to make choices on their own, but should responsibility apply to the robot and the farmer as it does in juvenile law for a child and a parent?

From the insurer’s standpoint, it makes sense to assign responsibility to the appropriate party. If AARW entered a highway, the responsibility should fall on the farmer, who should have been close enough to stop it. Giving away produce, which could amount to petty theft, is a case where AARW incorrectly applied an action it learned, so the robot remains largely responsible.

To more fairly distribute blame, it may be worthwhile for robotic liability to contain two types of deductible. One would be the deductible paid when 51% or more of the blame is due to human negligence, and it would be treble the second deductible, which would apply when 51% or more of the blame is due to an incorrect choice on the robot’s part. This would help impress on the human the need to make responsible choices about the robot’s actions, while also recognizing that robots will sometimes make unexpected choices, ones that may be largely unforeseeable to human thinking. Such assignment of responsibility should also have a high chance of withstanding judicial and underwriting scrutiny.
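
Expressed as a rule, the two-tier idea looks like the sketch below. The base deductible amount and the 51% threshold are illustrative assumptions, not figures drawn from any existing policy: if a majority of the blame rests on human negligence, the insured pays treble the base deductible; otherwise the base deductible applies.

    def applicable_deductible(human_blame_pct: float, base_deductible: float = 10_000) -> float:
        """Two-tier deductible sketch: treble when humans bear the majority of the blame.

        human_blame_pct: share of blame assigned to human negligence, 0 to 100.
        base_deductible: hypothetical robot-fault deductible (illustrative figure only).
        """
        if human_blame_pct >= 51:
            return 3 * base_deductible  # majority human negligence: treble deductible
        return base_deductible          # robot's own unexpected choice: base deductible

    print(applicable_deductible(60))  # 30000 -- e.g. the farmer left AARW unattended near the highway
    print(applicable_deductible(30))  # 10000 -- e.g. the robot misapplied a learned behavior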

Another drawback of relegating robots to any existing form of liability coverage involves underwriting expertise. Currently, most insurers that offer cyber liability and technology E&O seem to possess little expertise about the intersection of risk and technology. That lack hurts insurers and their clients, who suffer time and again from inadequate coverage and unreasonable pricing. It would be advantageous to create robotic liability coverage unencumbered by such deficiencies. By establishing a new insurance product and entrusting it to those who do understand the intersection of humans, liability and robots, insurers will be able to satisfy the demands of those who seek to leverage robots while also establishing a reliable stream of new revenue.

A 21st century product ought to be worthy of a 21st century insurance policy.

Another aspect of exposure that needs to be considered is how a robot is seen socially, something that Professor Calo discusses in his paper “Robotics and the Lessons of Cyberlaw.” Robots are likely to be viewed as companions, or valued possessions, or perhaps even friends.

Around the turn of the century, Sony created an experimental robotic dog named Aibo. Now a number of Aibos are enjoying a second life, thanks to the pleasure people in retirement homes take in interacting with them. One of the original Sony engineers created his own company just to repair dysfunctional Aibos.

While that particular robot is fairly limited in its interactive abilities, it provides an example of how willing people are to consider robots as companions instead of mechanical tools with limited value. It is more than likely that people will form social bonds with robots. And, while it is one thing to be verbally annoyed at a water pump for malfunctioning and adding extra work to an already busy day, mistreatment of a robot by its employer may be seen and felt differently by the robot’s co-workers. Some people already treat a program like Apple’s Siri inappropriately: they tell Siri that it is sexy, ask what it “likes” in a romantic sense and exhibit other such behaviors toward the program, even in professional settings. While such behavior has not yet resulted in an EPL (employment practices liability) claim, such unwarranted behavior may not be tolerated.

Consequently, the additional exposures created by a robot’s social integration into human society will more than likely result in adding elements to an insurance claim that products liability, technology E&O and other current insurance products would be ill-suited to deal with.

See also: Of Robots, Self-Driving Cars and Insurance

Advanced robotics makes some of the future murky. Will humans be able to code self-awareness into robots? Are droid armies going to create more horrific battlegrounds than those created by humans in all prior centuries? Are autonomous vehicles the key to essentially eliminating human fatalities?

However useful those kinds of questions are, the answer to each, for the foreseeable future, is unknown. What we do know for sure is that the realm of advanced robotics is moving off the drawing board and into professional work environments, creating unexplored liability territory. Accordingly, the most efficient way to go into the future is by creating robotic liability now, because with such a product insurers can both generate a new stream of revenue and provide a more economically stable world.

Y2K Rears Its Head One More Time

In the late 1990s, in the run up to Jan. 1, 2000, insurers deployed Y2K or “electronic date recognition” exclusions into a multitude of insurance policies. The logic made sense: The Y2K date change was a known risk and something that firms should have worked to eliminate, and, if Armageddon did materialize, well, that’s not something that the insurance industry wanted to cover anyway.

Sixteen years later, one would expect to find Y2K exclusions only in the Lloyd’s of London “Policy Wording Hall of Fame.” But not so fast.

Electronic date recognition exclusions are still frequently included in a variety of insurance contracts, even though it’s doubtful that many folks have given them more than a passing glance while chuckling about the good old days. And now is the time to take a closer look.

Last month, various cybersecurity response firms discovered that a new variant of the Shamoon malware was used to attack a number of firms in the Middle East. In 2012, the original version was used to successfully attack Saudi Aramco and resulted in its needing to replace tens of thousands of desktop computers. Shamoon was used shortly thereafter to attack RasGas, and, most notoriously, the malware was used against Sony Pictures in late 2014. Shamoon has caused hundreds of millions of dollars of damages.

The new version, Shamoon v2, changes the target computer’s system clock to a random date in August 2012 — according to research from FireEye, the change may be designed to make sure that a piece of software subverted for the attack hasn’t had its license expire.

This change raises issues under existing electronic date recognition exclusions because many are not specifically limited to Jan. 1, 2000; they instead feature an “any other date” catch-all. For example, one of the standard versions reads, in part:

“This Policy does not cover any loss, damage, cost, claim or expense, whether preventative, remedial or otherwise, directly or indirectly arising out of or relating to any change, alteration, or modification involving the date change to the year 2000, or any other date change, including leap year calculations, to any such computer system, hardware, program or software and/or any microchip, integrated circuit or similar device in computer equipment or non-computer equipment, whether the property of the Insured or not.”

See also: Insurance Is NOT a Commodity!  

By our estimation, this exclusion is written broadly enough to exclude any losses resulting from a Shamoon v2 attack, if indeed the malware’s success is predicated on the change in system dates to 2012.

Given that the types of losses that Sony and Saudi Aramco suffered can be insured, firms shouldn’t be caught off guard. We advise a twofold approach: Work with your insurance broker to either modify language or consider alternative solutions; and ensure that your cybersecurity leaders are monitoring your systems for indicators of compromise, including subtle measures like clock changes.
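
On the monitoring point, a clock-drift check can start as something as simple as comparing each host’s clock with a trusted external time source and alerting when the gap is implausible. The Python sketch below is a minimal starting point, assuming the monitoring host can reach a trusted HTTPS endpoint; the URL and five-minute threshold are placeholders to be replaced with your own reference and tolerance.

    # Minimal clock-drift check: flag hosts whose system time is far from a trusted reference.
    from datetime import datetime, timezone
    from email.utils import parsedate_to_datetime
    from urllib.request import urlopen

    TRUSTED_URL = "https://www.example.com"  # placeholder: any trusted, reachable HTTPS endpoint
    MAX_DRIFT_SECONDS = 300                  # placeholder threshold: five minutes

    def clock_drift_seconds(url: str = TRUSTED_URL) -> float:
        """Return the absolute gap between local time and the server's Date header."""
        with urlopen(url, timeout=10) as response:
            server_time = parsedate_to_datetime(response.headers["Date"])
        return abs((datetime.now(timezone.utc) - server_time).total_seconds())

    drift = clock_drift_seconds()
    if drift > MAX_DRIFT_SECONDS:
        print(f"ALERT: system clock is {drift:.0f}s off the reference; investigate")
    else:
        print(f"Clock drift of {drift:.0f}s is within tolerance")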