
The Evolution in Self-Driving Vehicles

Although driverless cars may not become mainstream for more than a decade, there are certain considerations that insurance executives should start thinking about now. We will continue to explore this evolving topic and suggest ways insurers can position themselves to take advantage of the enormous disruption that autonomous technology will cause to the business of risk. We will provide our perspectives on how the risks involved in transportation will be transformed, how financial responsibility will be assigned and how insurance products will need to be adapted – and how the key issues might be influenced by regulators and legislators.

In our view, insurers will face these five key challenges.

Challenge 1: What risks will remain – and will new ones arise?

A primary aim of autonomous technology is to reduce the number of traffic accidents, and the public’s and regulators’ expectations will be very high. We will examine what the residual risk of collisions could be and how the cost of injuries and repairs could change. We will offer our view on how new technologies will improve reporting of claims and change the potential for fraud.

At the same time, new risks will emerge, such as cyber attacks, software bugs and control failures. What will the exposure to systemic risks mean for insurability?

See also: Future of Self-Driving Cars (Infographic)

Challenge 2: Who is the customer, and how will we do business with that customer?

Who is liable for risk will be the key question, especially if a high proportion of remaining accidents is attributable to failures in control software and systems. We will consider how original equipment manufacturers (OEMs) and other manufacturers could become liable for claims in the future, and whether they can shift the legal or financial burden to others in the supply chain. For example, could vehicle end users be required to purchase policies to indemnify OEMs, or will the cost of product liability insurance be passed to new vehicle purchasers? If transportation is consumed on a pay-per-use basis, could insurance be wrapped into the charge?

Whatever the outcome, the current insurer-consumer relationship – along with marketing, sales and distribution methods – will be fundamentally altered. Retaining control over this relationship will be essential if insurers are to avoid becoming redundant or marginalized by other players.

Challenge 3: How will the insurance product have to change?

Changes in liability and use will necessitate major revisions to insurance products to meet the market’s needs. We will examine how autonomous products can be developed and configured to cover gray areas of liability and negligence resulting from the overlap between human and computer control. Would product tiers correspond to the “one-to-five” scale of the vehicle’s automation capability? Pay-per-use (versus “blanket” cover) could imply that short-term rather than annual renewable policies would become the norm – and lessons learned from current ride-sharing products could be employed. How will regulation affect or keep pace with the new products? Considerations for commercial lines might be significantly different, because adoption is expected to be fastest there and because different technologies and enhanced safety overrides could be economical to deploy.

Challenge 4: How will we price it – and can it still be profitable?

The relative importance of different rating factors in pricing will change markedly. First, analysis of risk would depend primarily on the degree of self-driving versus manual control. For autonomous operation, pricing would be based on assessing the vehicle’s level of automation in terms of its technology, quality of implementation and anticipated types of driving. There are nuances among manufacturers even for relatively basic, standardized technologies, such as automatic emergency braking (AEB), and fuller automation capability will vary still more depending on the OEM, sensor quality and software used. How would data on technical capability and usage statistics be collected? Could this be centralized in some way and retrieved transparently by insurers, rather than having to be disclosed?
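To make that shift concrete, below is a minimal sketch of how a rating calculation keyed to automation level might blend manual and autonomous exposure. The relativities, the quality factor and every name in it are hypothetical illustrations, not actual actuarial values from any insurer.

```python
# Hypothetical sketch: blending manual and autonomous exposure into one
# premium. All relativities and factors below are illustrative assumptions.

# Assumed relativities by automation level (0 = no automation, 5 = full).
AUTOMATION_RELATIVITY = {0: 1.00, 1: 0.95, 2: 0.85, 3: 0.70, 4: 0.45, 5: 0.30}

def blended_premium(base_rate: float,
                    automation_level: int,
                    autonomous_share: float,
                    tech_quality_factor: float = 1.0) -> float:
    """Price a vehicle that is partly human-driven, partly autonomous.

    autonomous_share: fraction of driving expected under computer control.
    tech_quality_factor: hypothetical adjustment for OEM, sensor quality
    and software, e.g. 0.9 for a strong implementation, 1.1 for a weak one.
    """
    manual_part = (1.0 - autonomous_share) * base_rate
    autonomous_part = (autonomous_share * base_rate
                       * AUTOMATION_RELATIVITY[automation_level]
                       * tech_quality_factor)
    return manual_part + autonomous_part

# Example: a level-3 vehicle expected to drive itself 60% of the time,
# from a well-regarded OEM, against a $1,000 base rate.
print(blended_premium(1000.0, 3, 0.60, 0.9))  # 778.0
```

The point of the sketch is only that the dominant rating factors become the automation level, the autonomous share of driving and the quality of the implementation, rather than the driver’s record.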

The economics of the product will also be very different given a much reduced number of claims, and we will examine the speed of change, the resulting size of the market over time and the return on capital it might sustain compared with the present. A key question will be to what extent the shrinkage might be offset by increased overall demand for transportation, given the surge in accessibility of car transportation combined with the anticipated benefits to congestion. Could alternative, discretionary coverages become more relevant?

Challenge 5: What influence will legislators have?

A large number of agencies are managing pilot programs, and their policies will have a major influence by encouraging or inhibiting adoption in each country. We will give an overview of the current progress in each jurisdiction and highlight leading models that we foresee becoming the templates for broader rollout.

Starting from an overview of the applicability of current insurance legislation to autonomous vehicle operation, we will review how legislation is likely to guide the cover and scope of autonomous insurance products in the future and the likely compulsory minimum cover requirements.

See also: Of Robots, Self-Driving Cars and Insurance  

Conclusion

As we have seen, autonomous vehicles will revolutionize mobility and, inevitably, automobile insurance. While we cannot predict the pace of these changes, we encourage insurers to prepare accordingly.

The lessons from other industries are stark. Companies content to wait and see, or, worse, oblivious to the threat until it is too late, could share the familiar fate of other household names that have been left behind by a wave of new technology.

In considering the next steps, insurers should analyze their business portfolios and strategies to understand their exposure to these changes. They should conduct what-if scenario analysis to model potential effects and evaluate what actions will be required to transform their organizations in parallel with the various levels of car automation.

Early innovators are likely to generate substantial benefit for their businesses. To be successful in this space, insurers will need to aim for agile innovation and improve the way they use increasing volumes of data. They should also explore new collaborative models to shape a connected automotive ecosystem that will include insurers, auto manufacturers, technology companies and regulators.

You can find the full report from EY here.

Now Is the Time for Cyber to Take Off

Uncertainty about several key variables appears to be causing U.S. businesses and insurance companies to move cautiously into the much-heralded, though still nascent, market for cyber liability policies.

Insurers continue to be reluctant to make policies more broadly available. The big excuse: Industry officials contend there is a relative lack of historical data around cyber incidents, and they bemoan the constantly evolving nature of cyber threats.

This assessment comes in a report from the Deloitte Center for Financial Services titled “Demystifying Cyber Insurance Coverage: Clearing Obstacles in a Problematic but Promising Growth Market.”

“Insurers don’t have sufficient data to write coverage extensively with confidence,” says Sam Friedman, insurance research leader at Deloitte.

But the train is about to leave the station, and some of the stalwarts who shaped the insurance business into the ultra-conservative (read: resistant to change) sector it has become could very well be left standing at the station.

Consider that regulations imposing tighter data handling and privacy protection requirements are coming in waves. Just peek at the New York Department of Financial Services’ newly minted cybersecurity requirements or Europe’s newly adopted General Data Protection Regulation.

With cyber threats on a steadily intensifying curve, other jurisdictions are sure to jump on the regulation bandwagon, which means the impetus to make cyber liability coverage a standard part of everyday business operations will only increase.

Meanwhile, cybersecurity entrepreneurs, backed by savvy venture capitalists, are moving aggressively to eliminate the weak excuse that there isn’t enough data available to triangulate complex cyber risks. In fact, the opposite is true.

Modern-day security systems, such as anti-virus suites, firewalls, intrusion detection systems, malware sandboxes and SIEMs (security information and event management systems), generate mountains of data about the security health of business networks. And the threat intelligence systems designed to translate this data into useful operational intelligence are getting more sophisticated all the time.

See also: Why Buy Cyber and Privacy Liability. . .  

And while large enterprises tend to have the latest and greatest of everything in-house, even small and medium-size businesses can access cutting-edge security systems through managed security services providers.

Meanwhile, big investment bets are being made in a race to be the first to figure out how to direct threat intelligence technologies to the task of deriving the cyber risk actuarial tables that will permit underwriters and insurers to sleep well at night. One cybersecurity vendor to watch in this arena is Tel Aviv, Israel-based InnoSec.

“Cyber insurance policies are being given out using primitive means, and there’s no differentiation between policies,” observes InnoSec CEO Ariel Evans. “It’s completely noncompetitive and solely aimed right now at the Fortune 2000. Once regulation catches up with this, cyber insurance is going to be required. This is around the corner.”

InnoSec originally developed systems to assess the compliance status and overall network health of companies involved in merger and acquisition deals. It has now shifted to seeking ways to apply those network assessment approaches to the emerging cyber insurance market.

At the moment, according to Deloitte’s report, that market is tepid, at best. While some have predicted U.S. cyber insurance sales will double and even triple over the next few years to reach $20 billion by 2025, cyber policies currently generate only between $1.5 billion and $3 billion in annual premiums.

Those with coverage in the minority

As of last October, just 29% of U.S. businesses had purchased cyber insurance coverage despite the rising profile of cyber risk, according to the Deloitte report. Such policies typically cover first- and third-party claims related to damages caused by a breach of personally identifiable information or some derivative, says Adam Thomas, co-author of the Deloitte report and a principal at the firm. In some cases, such policies also might cover business disruption associated with a cyber incident.

The insurance industry contends it needs more businesses to buy higher-end, standalone cyber insurance policies until enough claims data can be collected to build reliable models, much as was done in the development of auto, life and natural disaster coverage.

But businesses, in turn, aren’t buying cyber policies in enough numbers because insurers are adding restrictions to coverage and putting fairly low limits on policies to keep exposure under control. “It is a vicious cycle,” Friedman says.

“Insurers recognize that there is a growth opportunity, and they don’t want to be left out of it,” he says. “On the other hand, they don’t want to take more risk than they can swallow.”

While the insurance industry gazes at its navel, industry analysts and cybersecurity experts say the big challenge – and opportunity – is for underwriters and insurers to figure out how to offer all businesses, especially small and medium-size companies, more granular kinds of cyber policies that actually account for risk and provide value to the paying customers.

“What they’re doing now is what I call the neighbor method,” InnoSec’s Evans says. “You’re a bank, so I’ll offer you a $100 million policy for $10 million. The next guy, he’s a bank, so I’m going to offer him a $100 million policy for $10 million. It has nothing to do with risk. The only place this is done is with cyber.”

Talk in same terms

This is due, in part, to a lack of standard terminology used to describe cyber insurance-related matters, says Chip Block, vice president of Evolver, a company that provides IT services to the federal government. The SANS Institute, a well-respected cybersecurity think tank and training center, last year put out a report that drills down on the terminology conundrum, including recommendations on how to resolve it, titled Bridging the Insurance/Infosec Gap.

And the policies themselves have been another factor. “If you compare car insurance from Allstate and Geico, a majority of the policies are relatively the same,” Block says. “We haven’t gotten to that point in cyber. If you go from one underwriter to another, there is no common understanding of the terminology.”

Understandably, this has made it hard for buyers to compare policies or to determine the relative merits of one policy over another. Block agrees that cyber policies today generally do not differentiate based on risk profile – so a company that practices good cyber hygiene is likely to see no difference in premiums compared with one that doesn’t.
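To illustrate the gap Evans and Block describe, here is a toy contrast, under invented numbers, between flat “neighbor method” pricing and pricing adjusted by a cyber-hygiene score; none of the rates or weights comes from a real insurer.

```python
# Toy contrast: flat "neighbor method" pricing vs. risk-adjusted pricing.
# All rates and weights are invented for illustration.

FLAT_RATE_BY_INDUSTRY = {"bank": 10_000_000, "retailer": 4_000_000}

def neighbor_method_premium(industry: str) -> int:
    # Every bank pays the same, regardless of actual security posture.
    return FLAT_RATE_BY_INDUSTRY[industry]

def risk_adjusted_premium(industry: str, hygiene_score: float) -> float:
    """hygiene_score in [0, 1], derived from factors such as patching
    cadence, intrusion detection and incident-response readiness.
    Good hygiene earns up to a 40% discount; poor hygiene up to a 40%
    surcharge (both figures are hypothetical)."""
    return FLAT_RATE_BY_INDUSTRY[industry] * (1.4 - 0.8 * hygiene_score)

print(neighbor_method_premium("bank"))     # 10000000, for every bank alike
print(risk_adjusted_premium("bank", 0.9))  # 6800000.0, well-run bank
print(risk_adjusted_premium("bank", 0.2))  # 12400000.0, poorly run bank
```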

See also: How Data Breaches Affect More Than Cyberliability  

Industry must get moving

InnoSec’s Evans argues that, even though cybersecurity is complex, the technology, along with best-practice policies and procedures, is readily available to solve the baseline challenges. What is lacking is initiative on the part of the insurance industry to bring these components to bear on the emerging market.

“This is absolutely possible to do,” she says. “We understand how to do it.”

Putting technological solutions aside, there is an even more obvious path to take, Friedman argues. Resolve the terminology confusion and there is little stopping underwriters and insurers from crafting and marketing cyber policies based on meeting certain levels of network security best practices standards, Friedman says.

“You look at an organization’s ability to be secure, their ability to detect intrusions, how quickly they can react and how much they can limit their damage,” he says. “In fact, insurers should go beyond just offering a risk-transfer mechanism and be more aggressive in helping customers assess risk and their ability to manage and prevent.”

Thomas points to how an insurance company writing a property policy for a commercial building might send an engineering team to inspect the building and make safety recommendations. The same approach needs to be taken for cyber insurance, he says.

“The goal is to make the insured a better risk for me,” he says.

What Liabilities Do Robots Create?

The intersection of humanity and robots is being transported from the human imagination and formed into a tangible reality. Many books and movies, like I, Robot and Her, have analyzed various potential impacts of that intersection, but the complete intersection will actually be that of humanity, robots and liability.

It is insufficient, however, to know that advanced robotics and liability will intersect. Advanced robotics is going to thrust upon insurers a world that is extremely different from the one they sought to indemnify in the 20th century. Already, drones and autonomous vehicles are forcing some parts of the insurance sector to try to determine where responsibility exists so that liability can be appropriately assigned, and those efforts will continue for at least the next decade.

The liability created by robots operating alongside humanity now makes it incumbent on commercial, and especially professional, insurers to engineer robotic liability products that provide clients and the global economy with stability while providing insurers a valuable stream of revenue.

There are some ground rules that must be considered before bringing robotic liability to life. First, what is the definition of a robot? For the purposes of this paper, Professor Ryan Calo’s definition of a robot will be used. According to the professor, a robot can sense, process and act on its environment. There is also the realization that currently it may be beyond human ability to create a unified robotic liability doctrine for insurance purposes. This is largely due to the environments in which robots will exist, as well as the ramifications of those environments from a legal, physical and practical standpoint. After all, drones capable of sustained flight are inherently going to exist in a different realm from ground-based autonomous vehicles, and the same is true for robots capable of sub-orbital and intra-planetary flight. Therefore, this paper is going to focus on a discrete part of robotic liability: those robots used in agricultural fields. Another reason for focusing on one area of robotics is to keep things simple while exploring this uncharted part of the insurance sector.

See also: Here Comes Robotic Process Automation

The farmer, the field and the harvest, the most commonplace of settings, provide an area where dimensions of robotic liability can be easily analyzed and understood. Plant husbandry draws on thousands of years of human knowledge, and it is already using aerial drones and big data analytics to maximize crop yields. Additionally, the agricultural arena has a high likelihood of being an area wherein robots cause significant shifts in multiple areas of the economy.

Within the next two or three years, a robot like this paper’s fictional AARW (autonomous agriculture robotic worker) will be created and sent to the fields to begin to replace human labor when it comes time to harvest a crop. There are multiple reasons for this belief, starting with the advance of robotic technology. In 2015, the DARPA Robotics Challenge demonstrated an array of robots that will be the ancestors of a robot like AARW. In that competition, robots were required to walk on uneven terrain, accomplish tactile tasks and even drive a traditional vehicle. While the robots in that challenge were not largely or fully autonomous, they are an undeniable major step toward productive autonomous robots.

There are already simple machines that can perform a variety of functions, even learning a function by observing human movements, and the gap between the drawing board and reality is quickly being closed by the tremendous amount of computer hardware and software knowledge produced by both private and public institutions each month.

Moreover, there are strong labor and economic incentives for the introduction of robots into the agricultural field. Robots are able to work non-stop for 12 hours, are free from any form of health and labor laws and can have life expectancies in the five- to 15-year range. Crops are, more often than not, planted in fields with straight rows and require only the robotic ability to pick up an item, like a watermelon, take it to a bin, deposit the melon in the bin and then repeat the same steps on the next watermelon. All this requires only a modest amount of know-how on the robot’s part.

If AARW is built to industrial quality standards, then it will only require a minimal amount of maintenance over the course of each year. And if AARW is powered using solar panels, then the cost of its fuel will be included in the robot’s purchase price, which means that the minor maintenance cost along with a possible storage cost will be the only operating costs of AARW. With its ability to work non-stop and with no overhead costs for complying with human health and labor laws, AARW will be a cheaper alternative to human workers, providing a strong economic incentive for farmers to use robots in the field.
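A back-of-the-envelope comparison shows the shape of that incentive. Every figure below is an assumed illustration, not a quoted price for any actual robot or labor market.

```python
# Rough annualized cost comparison: AARW vs. seasonal human labor.
# All figures are assumptions for illustration only.

def robot_annual_cost(purchase_price: float, life_years: float,
                      maintenance: float, storage: float) -> float:
    # Solar power folds fuel into the purchase price, so the recurring
    # costs are just depreciation, minor maintenance and storage.
    return purchase_price / life_years + maintenance + storage

robot = robot_annual_cost(purchase_price=150_000, life_years=10,
                          maintenance=2_000, storage=1_000)

# Assumed: a 3-person crew, a 90-day harvest season, $150/day fully loaded.
human_crew = 3 * 90 * 150

print(f"Robot:  ${robot:,.0f} per year")       # Robot:  $18,000 per year
print(f"Humans: ${human_crew:,.0f} per year")  # Humans: $40,500 per year
```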

An agricultural robot will, however, create unique exposures for a farmer, and those exposures will cultivate the need for robotic liability. Arguments can be made for completed operations/product liability and technology E&O exposures with AARW in the field. However, there are multiple reasons why it would be unwise to try to relegate liability for AARW to any current product.

First and foremost, there is a strong expectation among scholars and legal experts that robots are going to do unexpected things. Imagine: At harvest time, the farmer brings AARW to the field to collect the crop of watermelons. The field happens to be near a highway on which big rigs travel, and part of the field lies next to a blind corner in the highway. As AARW successfully harvests one row after another, the farmer’s attention drifts, and she begins talking with a neighbor. Suddenly, there is a screech of tires and a loud bang as a big rig slams into AARW, which, for an unknown reason, walked into the highway.

Who should bear responsibility for the untimely demise of AARW?

If AARW were a cow, then the insurer of the big rig would have to reimburse the farmer for the loss of one of her cows. In certain respects, AARW and a cow are the same in that they can sense, process and act upon their environment. However, a cow has what is often described as a mind of its own, which is why insurance companies and the law have come to place the fault of a rogue cow on the unwitting vehicle operator instead of the aggrieved farmer.

AARW, though, is not a cow. It is a machine created to harvest produce. Does the software that controls the robot’s actions equate to the free will of an animal, like a cow? The farmer who lost the cow does not demand her money back from the rancher who sold her a reckless bovine product. Why should the creator of the robot be expected to reimburse the farmer for the loss of AARW? How does it make sense for product liability to come into play when the rancher shares no blame for the indiscreet cow? Technology companies have been extremely successful at escaping liability for the execution of poorly crafted software, so the farmer is unlikely to find any remedy in bringing a claim against the provider of the software, even if it is a separate entity from the one that assembled AARW.

Regardless of where blame is assigned, the issue would be awkward for insurers that tried to force the liability for the robot’s actions into any current insurance product. At worst, the farmer would not be made whole (technology E&O), and, at best, changing existing laws would likely only partially compensate the farmer for the loss of AARW.

See also: The Need to Educate on General Liability  

The liability waters are already murky without robotic liability. Machine learning will likely create situations that are even more unexpected than the above possibility. Imagine if AARW imitated the farmer in occasionally giving free produce samples to people passing the field. In the absence of robotic liability insurance, who should be responsible for a mistake or offending action on the robot’s part?

It would be unfortunate to place all of the blame on AARW or the farmer. The situations also call into question the quality of programming with which the robot was created. And the historical evidence presented by M.C. Elish and Tim Hwang in their paper “Praise the Machine! Punish the Human!” suggests it would be unwise to expect liability to be appropriately adjudicated were a farmer to sue the creator of AARW.

With an autonomous robot like AARW, it is possible to bring into consideration laws related to human juveniles. A juvenile is responsible if she decides to steal an iPad from a store, but, if she takes the family Prius for a joyride, then the parents are responsible for any damage the juvenile causes. Autonomous robots will inherently be allowed to make choices on their own, but should responsibility apply to the robot and the farmer as it does in juvenile law for a child and a parent?

From the insurer’s standpoint, it makes sense to assign responsibility to the appropriate party. If AARW entered a highway, the responsibility should fall on the farmer, who should have been close enough to stop it. Giving away produce, which could amount to petty thievery, is wrong, and, because AARW incorrectly applied an action it learned, the robot would remain largely responsible.

To more fairly distribute blame, it may be worthwhile for robotic liability to contain two types of deductible. One would be paid when at least 51% of the blame lay with human negligence, and it would be treble the second deductible, which would apply when at least 51% of the blame lay with an incorrect choice on the robot’s part. This would help to impress on the human the need to make responsible choices about the robot’s actions, while also recognizing that robots will sometimes make unexpected choices, choices that may have been largely unforeseeable to human thinking. Such an assignment of responsibility should also have a high chance of withstanding judicial and underwriting scrutiny.
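A minimal sketch of that two-deductible rule, with hypothetical dollar amounts, might look like this:

```python
# Sketch of the two-deductible rule described above. The amounts are
# hypothetical; the rule is that majority human negligence carries a
# deductible treble the one for a majority-robot misjudgment.

ROBOT_FAULT_DEDUCTIBLE = 5_000                       # assumed base amount
HUMAN_FAULT_DEDUCTIBLE = 3 * ROBOT_FAULT_DEDUCTIBLE  # treble for negligence

def applicable_deductible(human_blame_pct: float) -> int:
    """human_blame_pct: the adjuster's allocation of blame to the human,
    on a 0-100 scale."""
    if human_blame_pct >= 51:
        return HUMAN_FAULT_DEDUCTIBLE  # majority human negligence
    return ROBOT_FAULT_DEDUCTIBLE      # majority robot misjudgment

print(applicable_deductible(70))  # 15000: the farmer let AARW wander
print(applicable_deductible(30))  # 5000: an unforeseeable robot choice
```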

Another problem with relegating robots to any existing form of liability involves underwriting expertise. Currently, most insurers that offer cyber liability and technology E&O seem to possess little expertise about the intersection of risk and technology. That lack hurts insurers and their clients, who suffer time and again from inadequate coverage and unreasonable pricing. It would be advantageous to create robotic liability that would be unencumbered by such existing deficiencies. By establishing a new insurance product and entrusting it to those who do understand the intersection of humans, liability and robots, insurers will be able to satisfy the demands of those who seek to leverage robots while also establishing a reliable stream of new revenue.

A 21st century product ought to be worthy of a 21st century insurance policy.

Another aspect of exposure that needs to be considered is in how a robot is seen socially, something that professor Calo discusses in his paper “Robotics and the Lessons of Cyberlaw.” Robots are likely to be viewed as companions, or valued possessions, or perhaps even friends.

At the turn of this century, Sony created an experimental robotic dog named Aibo. Now a number of Aibos are enjoying a second life due to the pleasure people in retirement homes experience when interacting with them. One of the original Sony engineers created his own company just to repair dysfunctional Aibos.

While that particular robot is fairly limited in its interactive abilities, it provides an example of how willing people are to consider robots as companions instead of mechanical tools with limited value. It is more than likely that people will form social bonds with robots. And, while it is one thing to be verbally annoyed at a water pump for malfunctioning and adding extra work to an already busy day, mistreatment of a robot by its employer may be seen and felt differently by co-workers of the robot. Some people already treat a program like Apple’s Siri inappropriately. People tell Siri that it is sexy, ask what it “likes” in a romantic sense and exhibit other behaviors toward the program, even in a professional setting, that are inappropriate. While such behavior has not yet resulted in an EPL (employment practices liability) claim, such unwarranted behavior may not always be tolerated.

Consequently, the additional exposures created by a robot’s social integration into human society will more than likely result in adding elements to an insurance claim that products liability, technology E&O and other current insurance products would be ill-suited to deal with.

See also: Of Robots, Self-Driving Cars and Insurance

Advanced robotics makes some of the future murky. Will humans be able to code self-awareness into robots? Are droid armies going to create more horrific battlegrounds than those created by humans in all prior centuries? Are autonomous vehicles the key to essentially eliminating traffic fatalities?

However useful those kinds of questions are, the answer to each, for the foreseeable future, is unknown. What we do know for sure is that the realm of advanced robotics is starting to move from the drawing board and into professional work environments, creating unexplored liability territory. Accordingly, the most efficient way to go into the future is by creating robotic liability now because, with such a product, insurers can both generate a new stream of revenue and provide a more economically stable world.

The Questions That Aren’t Being Asked

In Aldous Huxley’s 1932 novel Brave New World, many original ideas were posited about a futuristic society. Two of those ideas, now appearing in our present, involve eugenics and an ever-increasing reliance on technology.

Techniques like CRISPR (clustered regularly interspaced short palindromic repeats) to genetically engineer a human embryo, and technological advances like self-driving vehicles, could be said to represent some of Huxley’s notions. However, professional liability underwriters, especially those underwriting cyber liability and tech E&O, are out of phase with this “brave new world,” and this fact creates a dangerous situation for both those underwriters and an economic world dependent on them. To be responsible and successful in the present and into the future, the professional liability insurance sector must look backward to look forward and, in so doing, create a breed of underwriters who are every bit as creative as the future will be.

This lack of sync with present-day reality is clearly visible in the questions not asked on cyber liability and tech E&O applications. For instance, one current cyber liability application does not ask what type of firewall an applicant is using. A company can use a simple device with a firewall feature and claim to have a firewall in place, but that device will not come close to equaling the protection offered by a hardware-based NGFW, or Next Generation Firewall. The same application also does not ask whether multiple hardware and software ecosystems are used, even though the answer to that question, especially for a medium-sized or large business, offers significant insight into the company’s cyber security approach. Additionally, this particular application does not ask whether an applicant uses the services of a cyber security firm. Those kinds of questions, and the answers to them, convey an enormous amount of information about the cyber security posture of an applicant and, in turn, provide significant insight into whether a risk is worth underwriting and at what cost. For such questions to be missing from an application is dangerous for insurance companies and their clients.
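For illustration only, here is one way the missing questions could be captured and crudely scored on an application; the fields, weights and scale are hypothetical and not drawn from any insurer’s actual form.

```python
# Hypothetical sketch: capturing and scoring the application questions the
# text argues are missing. Fields, weights and scale are invented.

from dataclasses import dataclass

@dataclass
class CyberApplication:
    firewall_type: str           # e.g. "ngfw", "consumer-router", "none"
    multiple_ecosystems: bool    # mixed hardware/software ecosystems in use?
    retains_security_firm: bool  # uses the services of a cyber security firm?

def underwriting_score(app: CyberApplication) -> int:
    """Crude 0-100 posture score; a real rating model would be far richer."""
    score = {"ngfw": 40, "consumer-router": 10, "none": 0}.get(app.firewall_type, 5)
    score += 30 if app.multiple_ecosystems else 0
    score += 30 if app.retains_security_firm else 0
    return score

applicant = CyberApplication(firewall_type="ngfw",
                             multiple_ecosystems=True,
                             retains_security_firm=False)
print(underwriting_score(applicant))  # 70 out of a possible 100
```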

See Also: Space, Aviation Risks and Higher Education

The current situation with technology E&O applications is equally worrisome. For example, the exclusions list on one recently updated technology E&O policy contains no exclusion for computer languages known to be highly prone to cyber breaches. Theoretically, an insured software company could be writing code in Adobe Flash or JavaScript, languages that should be avoided. By not excluding those languages, the insurer is exposed to the adverse results of claims and lawsuits caused by an insured using hazardous code. Perhaps even worse, this insurer does not exclude wireless products that lack proper encryption. Thus, if a company that produces baby monitors creates a product that broadcasts its signal in an unencrypted format, claims could arise from a concerned consumer of that product. After all, what reasonable parent would allow anyone to spy on her child?

This issue is likely even worse because, time and again, successful lawsuits have already been brought against manufacturers of products that lack proper wireless encryption. The absence of such exclusions to protect itself and to encourage better behavior from its insureds calls into question whether a technology E&O insurer is in sync both with technology and the current legal environment. With underwriters being out of step in the present, one must wonder how they will be able to help drive the world forward in the future.

There are other parts of the professional insurance sphere that are not well positioned to be in harmony with the future. In the near future, robots will be introduced into social environments like nursing homes. If a robot injects medication into a patient, prescribes a medication or lifts a patient from a wheelchair to a bed, that takes an already risky situation into an unexplored legal realm. If a patient suffers an adverse reaction to a drug that was injected by a robot, how will the nursing home be protected by any of its insurance policies? Or what if a robot is provided by the nursing home to a patient who needs companionship? If the robot malfunctioned and could not be replaced, and the patient sank into a depressed state and died, how would insurance cover a wrongful death suit by the patient’s family? A general liability policy certainly would not cover such an event, and an allied health policy is not currently worded to handle such a risk. What about the manufacturer of that robot? Would a technology E&O policy step forward and indemnify the manufacturer?

Many countries, notably China, Japan and the U.S., have rapidly aging populations, and there are simply not enough people entering the field of senior care to handle the influx of those who will need care in their golden years. This means that robotics companies are going to be filling that void and, in so doing, will create an unprecedented situation that will require the professional insurance sector to provide guidance and protection to a rapidly aging world. To provide that guidance and protection, however, will require professional underwriters to understand the intersection of technology, human care and the law, an intersection with which underwriters are currently less than conversant.

So how do insurance companies offering cyber liability, technology E&O and other professional insurance get into sync with the evolving world they are underwriting? There was once an international competition, known as ThinkQuest, that encouraged students in the seventh through twelfth grades to form groups of two or three people and build educational websites. It was supported by both governmental and private organizations, had strong support from educators in more than thirty countries and rewarded the most successful competitors with scholarships of as much as $25,000. A similar approach must now be embraced and championed by the insurance industry. The brilliance of ThinkQuest was that it brought together young people who could appreciate and understand a multitude of ideas and numerous bodies of knowledge, who were willing to learn and teach at the same time and who could convey their ideas both in the written word and in binary. The spectrum of ideas that the groups put forth ranged from examining a social phenomenon like Harry Potter to examining how music affects people’s mental and physical health.

To be able to fully appreciate and understand nearly every cyber liability and technology E&O risk requires people who have an uncommon breadth and depth of knowledge that extends from simple areas like grammar to complex areas like quantum mechanics. Underwriting a risk like SSA (space situational awareness), underwriting a company that produces electronic-photonic chips or understanding memory-resident malware requires a degree of understanding that is clearly not being demonstrated by the majority of the current breed of underwriters. However, that degree of wide-ranging creativity is what the ThinkQuest competitions were created to foster in young people. The insurance industry needs people who can draw from a wide range of knowledge, and it also needs people who can write binary code with exactitude. Insurance companies must employ cyber forensic engineers who can pinpoint where a security breach happened, how an intruder gained access to additional computers and how to remedy the situation.

Being able to work individually or in a team, being able to backtrack to the point of intrusion and being able to view the world in tangible and non-tangible ways requires more than someone who can simply write one line of code after another. Currently, insurance companies depend on other companies to investigate data breaches, but this will not work out in the long run. In the 20th century, numerous insurance companies owned law firms to litigate claims economically. The 21st century will require cyber liability insurers to employ cyber forensic engineers to investigate claims based on network breaches. Moreover, in the very near future insurers will need to create an organization that tests routers, switches, servers, smart phones, robots and other technology devices to determine how secure or how capable those devices are. As has already been argued on the PLUS Blog in November 2015, not all technology devices are created with the same expertise, and figuring out which devices are least and most secure will greatly facilitate insurers’ ability to price policies correctly. However, to find young people who can view the computer realm in multiple dimensions, and to find those who can function in a cross-disciplinary environment and approach a risk from a multitude of angles can only be successfully accomplished on a large scale through an instructional competition.

People who have a broad and deep appreciation for multiple disciplines and cyber forensic engineers are uncommon, and insurance companies are not the only ones who need such thinkers. Cyber security companies, law firms, private and public educational organizations, research organizations, think tanks and governments are just a few sectors that need those types of people. This means that, as difficult as it already is to find thoughtful insurance people knowledgeable about the cyber world, the future is only going to be exponentially more troublesome.

When the 20-year-old who is going into her senior year at college thinks about the past and future, what will she strongly consider for a career? Will she remember the competitions that the insurance industry hosted, which allowed her to cultivate friends from all over the world and to gain the needed assurance in her skills as a programmer or a writer to pursue a major in computer science or history? Will she remember the competitions that helped fund her time at college and that, in doing all of that, proved that being a cyber liability underwriter is a fulfilling career opportunity? Or will that 20-year-old have nothing to remember where the insurance sector is concerned?

The Cyber Security Challenge is one competition that currently aims to increase the pool of cyber forensic engineers; however, it is not an international competition and focuses only on people who are capable of becoming cyber forensic engineers. Professional liability insurers need thinkers and tinkerers, and locating both on a large scale can only be accomplished through a competition like ThinkQuest. Nano-technology, advanced robotics, augmented reality and memory-resident malware are elements of a brave new world that cyber liability and tech E&O insurers are going to come face-to-face with in the short term. In three to five years, insurers are going to encounter robots where none have been before. If insurers do not create and enthusiastically support a competition like ThinkQuest, then insurers will not be acknowledged or remembered by those in college. Consequently, insurers will find themselves without a breed of underwriters who can thrive and understand the brave future. This must not be so!

Politics of Guns and Workplace Safety

The politics of guns in America are volatile, divisive and passionate, yet the risks that firearms present to organizations every day do not depend on the politics of the moment. Employers must deal with the reality of gun violence in America. A RIMS 2016 session discussed the legal aspects of what organizations can do and the practical implications of creating a firearms risk management program.

Speakers were:

  • Michael Lowry, attorney, Thorndal Armstrong Delk Balkenbush & Eisinger
  • Danielle Goodgion, director of human resources, Texas de Brazil

What Risks Do Firearms Pose?

OSHA states that an employer must provide “employment and a place of employment which are free from recognized hazards that are causing or are likely to cause death or serious physical harm to his employees.”

See Also: Active Shooter Scenarios

There are several risks to your organization, including:

  • Operations can halt in the case of a shooting. You have issues like police investigations and possibly injured employees.
  • Workers’ compensation will kick in if employees are injured.
  • General liability will be activated to cover injuries to non-employees.
  • Reputational risks are possibly the largest. You do not want your business associated with a violent act.

Most think that the Second Amendment bars private businesses from banning guns, but this is incorrect. The amendment applies to governments, not private homes and businesses.

Some employers react by posting signs banning all guns. This simple sign can be a recipe for disaster for several reasons:

  • Have you created a duty? If you post a sign, you have officially created a duty.
  • Why did you create this policy?
  • What are you doing to enforce this policy? Do you have a manual? Did you put up metal detectors? Probably not. You have to be able to prove you are enforcing the policy if you post a sign.
  • Did you train your employees to enforce this policy? If this policy is not enforced, a person might be injured by a firearm on your property.

“Bring Your Gun to Work” Laws

This is not a good idea. Under these laws, a business may not bar a person who is legally entitled to possess a firearm from keeping a firearm, part of a firearm, ammunition or an ammunition component in a vehicle on the property.

In Kentucky, an employee may retrieve the firearm in the case of self-defense, defense of another, defense of property or as authorized by the owner, lessee or occupant of the property. In Florida, an employer can be held liable for civil damages if it takes action against an employee exercising this right.

Reputational risks also can apply. You could either get special interest groups protesting against your business or people who refuse to do business with you.

The Middle Ground

It is best to create a policy. Even if you support the right to bear arms, you can do so subtly. A policy can spell out what type of carry you allow and what signs are required. Business owners also have the ability to allow no guns on the premises at all.

See Also: Broader Approach to Workplace Violence

Your policy should describe exactly how to approach a customer if an employee sees a weapon, including who should approach the customer, what to say and the steps to take to address the issue. Training is important.

Why Train?

  • Researchers from the Harvard School of Public Health and Northeastern University found the rate of mass shootings has tripled since 2011.
  • In 2014, an FBI study considered 160 events between 2000 and 2013. 70% occurred in business or educational settings.
  • From 2000 to 2006, the annual average rate was 6.4 shootings. That jumped to 16.4 from 2007 to 2013.

This is clearly a problem that is getting worse, so why is training rarely provided? Places of business are a target – especially retail, restaurants and businesses in the hospitality industry. The active shooter wants soft, easy targets in large, open, public and crowded areas, and the goal is to kill indiscriminately. If your business is doing well with large crowds, you are a soft target.

Active Shooter Resources

To learn how to manage this risk, you can find resources from:

  • Law enforcement
  • Insurance partners
  • Government
  • Outside experts
  • Legal
  • Human Resources

Online resources include: