
Agents’ Standard of Care for E&O Purposes

To begin on a dreary note, I feel like I am beating a dead horse discussing agencies’ standard of care. This would not even be a valid topic, except:

1. Too many attorneys are involved who cannot see the forest for the trees. They look at every situation with the idea that, if the agency had not done this or that, they would have an easy time winning the suit.

Their ability to win a suit easily should not be a factor in advising agencies to shirk their standards. Telling an agency to not advertise that they are professionals so that when they are accused of failing to provide services at a professional level they can win a case more easily is horrendous advice. Agents do not need attorneys who cannot win hard cases.

See also: Are P&C Insurers Failing Agents?  

Furthermore, advertising is not the issue. To even bring it up is evidence the attorney or other adviser is completely missing the point. The real point should be to act as a professional so that the agency can advertise as a professional. By acting as a true professional, the agency does not have to worry about using better advertising. It does not have to worry about being called out as a hypocrite for advertising one thing while doing something less.

2. A preponderance of agencies seems to want to be considered incompetent. A low standard of care is evidence of incompetence. At the very least, a low standard of care encourages amateurism.
This combination of advice from on high, from attorneys and advisers, with a willing audience that WANTS TO BE TOLD to act like amateurs, is a death knell for independent agencies because NO ONE NEEDS AMATEUR AGENTS!

The need for professional agents is stronger than ever. With so many new distributors of insurance, including some that do not seem to think insurance licenses are even important, existing amateur agents are being made redundant. Some of these new distributors are going one level dumber, just more cheaply.

Other new distributors are far cleverer because one has to read their advertisements carefully to understand that they create the impression of professionalism but not the promise of professionalism. They are using the difference between implying and inferring. They have larger budgets to hire more professional advertising experts who can craftily navigate between appearance and reality. I do not agree with their approach, but I understand it, and I expect some will be successful. This group’s success further negates the value, whatever value ever existed, of amateur agents.

The space that is left, which is largely uncontested, is the space of a true professional agency. Occupying it requires closing your ears to those advisers and attorneys who cannot grasp the difference between the E&O exposures of a professional agency that advertises professional services and the E&O exposures an amateur agency creates when it advertises professional-level services or images it does not deliver.

A true professional agency will incur far less E&O exposure because its clients are far more likely to buy the coverages they need! What is the cause of most E&O claims? The client not having the right coverage. If the agency sells clients more coverages, then the odds of a client not having the right coverage decrease. E&O is not that complex. The #1 way to avoid E&O is to sell clients the coverages they truly need, no more and no less.

Executing at a professional level is harder than devising the strategy, which is why this space is open. It is difficult, and, if it were easy, the space would not be available. Here are a few key points for becoming a true professional agent:

  1. Learn your coverages.
  2. Use a coverage checklist with your clients. Other than my proprietary exposure training process, no better tool exists, by far, than a checklist for determining coverage applicability.
  3. Read your forms. I flat out do not understand why anyone would assume what coverages do or do not exist in a non-ISO form without reading it, no matter how well that person knows the ISO form. If one is not selling an ISO form, then one has to read the proprietary form to know what is or is not in it. This is work. This is what you get paid to do as a pro. Amateurs take shortcuts.

Why do more agency personnel not take these three basic steps? To date, they’ve learned to make a living while remaining partially ignorant, so why start now? Please understand, I am not trying to be cynical, satirical or facetious. The fact is, based on the E&O claims I have seen and the hundreds and hundreds of interviews I’ve conducted with agency personnel, “ignorance” and “incompetence” are not overstatements. People with 10, 15 or 20 years’ experience cannot describe basic coverages, and yet they have made a living. Hence, they have made a living while remaining ignorant.

See also: Insurtechs: 10 Super Agents, Power Brokers

I can’t argue about past success, but, going forward, I do not see how this business model has much opportunity. The new disrupter agencies can achieve the same level of amateur knowledge for much lower commissions.

If an agent knows the coverages, identifies the coverages the client actually needs, sells the client those coverages, obtains the client’s sign-off on the coverages he or she needs but will not purchase, and then reads the forms to determine whether the coverages actually exist, the odds of a client being left with an uncovered exposure are quite low. Additionally, the agency’s sales will increase, and the agency can have more fun by advertising more powerfully. I think a smart agency owner would build the entire sales strategy around identifying other agents’ mistakes, which should be like shooting fish in a barrel.

Hiding behind an attorney’s caveats is no way to go through the world, and it is not much of a business strategy. Be bold by doing what your clients truly need you to do, enjoy your success and sleep better at night.

A Really Important Role for Agents

Agents have a crucial role protecting their clients, but not just by providing the right coverages. Do not get me wrong, selling the right coverages is of paramount importance for professional agents (and I don’t know what amateur agents are even supposed to do).

Another key service professional agents can provide clients is protecting them from insurance companies. A great example is reading forms (yes, actually reading forms) to determine whether coverage actually exists! I think cyber might be an excellent generic example of verifying whether true coverage is actually being provided or just appears to exist.

See also: 5 Predictions for Agents in 2018  

Another example, and a great way to prevent E&O claims, is careful policy checking on E&S policies. By and large, surplus lines carriers do not have to provide the coverages promised in their proposals. Neither do they have to notify agents or insureds at renewal if they reduce coverages. This is why they include a disclaimer stating they do not have this responsibility. It is one reason this is surplus lines and not an admitted market. An insured will not know the coverages have been stripped without careful review, and, even then, he or she may not understand. I know far too many agents who do not understand, so I don’t know why anyone should expect the average insured to understand. This is a job for professional agents!

A third example is provided by a recent court case. Joseph Beith provided the details in his blog (and if you care about insurance companies treating insureds fairly, I highly recommend you subscribe to his blog). A long-term care (LTC) provider included a sentence (used by at least one other carrier, too) that, “Your premiums will never increase because of your age or any changes to your health.” My bet is that 95 out of 100 insurance veterans would not recognize the problem with this “guarantee.” Beith recognized and pointed out the problem. The guarantee does not prohibit the company from raising rates on a class basis (and, as people age, their class ages).

If an agent has a choice of selling two policies, one with this tricky language and another without it, then, all else being equal, even if the policy without this language is more expensive, a professional agent will point out this crucial language issue. Insurance policies are, after all, legal contracts, so policy language matters, A LOT!

This may be an extreme example of arguably (and it is arguable, since it is part of a large lawsuit) crafty language, but important differences exist between carriers’ policies in virtually every instance. Whether it is simply a material difference in ordinance and law limits between two homeowners policies or huge contractual liability differences between two policies, professional agents will point out the differences. Doing so is crucial to helping insureds understand that insurance IS NOT a commodity when sold by professionals (again, I don’t even know what to call insurance when sold by amateurs other than disasters waiting to happen).

Pointing out differences in coverage shows clients you are actually working to help them rather than just working to make a buck. Pointing out differences gives clients the power, and, if they have the power, your relationship will likely be much stronger over time. Conversely, when they feel screwed because they were not educated and given the opportunity to choose, they are more likely to sue you or at least tell everyone they know not to do business with you.

See also: 4 Ways to Improve Agent Experience  

A problem with LTC and life insurance is that, when the events that trigger a claim occur, the agent may be long gone. P&C policies typically have a shorter lifespan, meaning more ramifications, good and bad, for professional agents. A good professional agent who makes these distinctions with good clients can achieve considerable success. I am not sure about the future of people selling coverages they do not know and do not communicate to clients. The future for absolute professionals is, however, so bright they will need shades.

Psychology’s Relevance in Security

The best way to defeat, or at least largely mitigate, hackers is with a dynamic defense system. When combined effectively, anti-virus software, NGFWs and the products and services from cybersecurity companies like CyberArk and FireEye can provide an organization with a resilient cybersecurity framework. However, such security measures are expensive and depend on employing IT professionals, which is why many organizations try to fend off cyber attacks with only anti-virus software and an NGFW. Yet there is another method with which to mitigate or prevent cyber breaches, and it is one that cyber liability and technology E&O insurers need to understand and immediately employ: human psychology.

The most common meeting of psychology and the binary world is the door to the binary world: the password. Most, if not all, underwriters have read an article or heard a lecture about how “password” and “123456” are the most frequently used keys when people attach a password to anything. Moreover, the commonality of those two keys has been a fact for decades, but the insecurity of using commonly known passwords as a passport remains virtually immune to change.

The longevity of weak keys is due to many factors, but at the heart of all of them is human psychology. It is a psychology that does not want to be bothered with memorizing a multitude of passwords, and one that tries to find the easiest way to meet a password requirement instead of trying to create a strong passport. Most importantly, it is risk-and-reward psychology that governs the creation of any password. Who in the professional world cares what a person’s password is as long as the work gets done and the person gets paid?

Yet current cyber liability and technology E&O wording does not even try to tackle this most basic insecurity, one that costs insurers large amounts of currency time and again. Insurers will continue to lose vast amounts of money due to the insecurity of a key like “123456” until insurers decide to tackle human psychology and work with technology companies to create a safe path forward out of the current mess in which the digital community finds itself.
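The scale of this weak-key problem is easy to demonstrate. As a minimal sketch (the word list and length threshold below are illustrative assumptions, not any insurer’s or underwriter’s actual criteria), a screening tool could reject the most commonly used keys outright:

```python
# Minimal sketch of screening out the most commonly used keys.
# The word list and the 12-character minimum are illustrative assumptions.

COMMON_PASSWORDS = {
    "password", "123456", "123456789", "qwerty", "abc123",
    "letmein", "111111", "iloveyou", "admin", "welcome",
}

def is_weak(password: str, min_length: int = 12) -> bool:
    """Return True if the password is a commonly known key or too short."""
    return password.lower() in COMMON_PASSWORDS or len(password) < min_length

print(is_weak("password"))  # True: the most common key of all
print(is_weak("123456"))    # True
print(is_weak("correct horse battery staple"))  # False
```

Checks like this are trivial to run at the moment a password is created, which is part of the point: the barrier to closing this particular hole is psychological, not technical.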

See also: How to Identify Psychosocial Risks  

If passwords were the only element of enterprise cybersecurity that needed to be reformed, then, to a high degree, the issue would not have far-reaching implications. However, the fact is that the weakness of keys is only a symptom of a larger problem.

Cybersecurity may be a topic that crops up in news headlines on a regular basis, but it is also a topic generally viewed as a fringe area of thought. At the enterprise level, this can be seen in one prominent way beyond dysfunctional passports, and that is in individual cybersecurity responsibility. Cyber breaches have cost the global economy no less than $400 billion each year since 2013, have affected essentially every part of the professional sphere, and are bringing governments around the world into conflict with their taxpayers, as represented, in one way, by the U.S. government’s attempt to force Apple to make its products less secure.

Nonetheless, to this day a majority of the companies around the world do not put part of the onus on individual employees for a company’s cybersecurity posture. Most companies do not include, in annual employee reviews, an area that deals with how the individual contributed to the strength or weakness of the company’s cybersecurity approach.

Did the employee use a strong password over the past year? Did the employee lock her computer each time she stepped away from her desk? Was the employee’s company computer linked to any cyber attacks? If the employee’s computer was linked to a cyber attack, then had the employee shown an appreciable improvement of her cybersecurity awareness?

When companies do not enforce the need for every employee to contribute to the cyber safety of the company, employees at all levels are allowed to have a carefree outlook, which is clearly detrimental to the cybersecurity posture of every organization. Even potential employees are not vetted for their sense of healthy cybersecurity. Companies ask numerous questions when interviewing a potential candidate, but very few companies try to assess the individual’s sense of responsibility when it comes to cybersecurity. If employees, and even applicants, are not expected to carry part of the responsibility, then what reason does any employee have to be responsible from a cybersecurity standpoint?

Perhaps more disturbing than the previous issues is that cyber liability and technology E&O insurers do not account for how human behavior influences the development of computer hardware and software. From about 1990 to the present, there has been a relentless movement by technology companies to get products to market at breakneck speed.

While a hardware company like Intel has produced some products of dubious quality, such as trying to push its Pentium III processor beyond the 1GHz level and the Rambus fiasco, hardware producers have largely avoided major mistakes. However, software developers are almost entirely responsible for the creation of a binary world where security has almost always been an afterthought, and human psychology is at the heart of this issue as well.

Since 1990, constant pressure has been placed on software engineers to meet deadlines set by a management system that is focused on everything but cybersecurity, which means that quality is almost always sacrificed to include a flashy software feature or simply to get a product to market quickly. Windows Me, Windows Vista, and Windows 8 are the results of a management system that showed great disregard for the safety of the end user.

Moreover, software engineers themselves also have the psychological outlook that, if an issue does come up after a piece of software is released, it can always be patched at a later date. Perhaps the most obvious example of the patching system in overdrive is that of smartphone operating systems and applications. It is not uncommon for one smartphone application to receive updates two or three times each month. However, the present wording of technology E&O policies and the questions asked in technology E&O applications continue to demonstrate a severe lack of understanding on the part of insurers as to how human behavior gives rise to technology E&O claims.

When it comes to human psychology, it seems that the most egregious lack of understanding by insurers is not comprehending their most prominent adversary: hackers. However, hackers are not all the same, which means that they are driven by different attitudes, thought processes and rewards. More than that, hacking is an art and, just like any other art, there are “newbies,” and there are actual artisans.

In the first of the four hacker tiers are elementary hackers, meaning those people under the age of 14. For the most part, elementary hackers are going to focus on their local geographic community. This is partly due to the experimenting nature of such a young hacker, because a 10- or 12-year-old is still trying to figure out how to hack. Therefore, geographically local targets present the best chances to hone a person’s skills. After all, the basic educational system, especially in the U.S. but elsewhere, too, spends very little on defensive technologies of any kind.

The local courthouse and sheriff’s office spend only slightly more than the educational system, and local merchants still largely maintain the attitude that they somehow do not appear on the radar of any hacker. Therefore, local venues often are the best targets because they often have the least security, in all forms, and consequently are the easiest ones on which to test a person’s skills.

However, insurers largely ignore this first tier and appear to have the mindset that these hackers are unworthy of recognition and that no solution as to how to engage with this group is needed.

The next tier contains the rookie hackers. These are the hackers who successfully “graduated,” unopposed, from the elementary group and who are generally 14 to 22 years old. For this next tier, the motivation is still whether the individual is capable of a hack, but now the target of the hack is going to extend, with ever greater frequency, beyond the immediate geographical location. It will also increasingly encompass working with and learning from others.

This is often the stage where hacktivists are going to begin to form and where the psychology of the hack is going to extend to obtaining items like currency and prestige. As hackers in this group encounter other hackers, they often start to form a set of ethics that make sense, but that are hard for a majority of people to understand. This same group is also going to start to attack national law enforcement institutions, yet even this tier is largely ignored by insurers around the world even though attacks from this group often involve PII, PHI, and payment card data.

Tier three is the first tier that has widespread acknowledgment from all insurers, and this tier encompasses both artisan and professional hackers. The hackers in this tier are often going to be 23 years old and older. One factor that makes this tier of hackers so effective in entering systems where they are not welcome is that they have been able to hone their skills from the age of 10 to 23.

Most people who build and hone a skill set over the course of 13 years will be fairly capable. Another factor is that this tier is composed of people who have a sense of identity, which means that this group has formed its own moral compass and conforms to ethics and outlooks that often fall outside of the global mainstream. This sense of identity and associated ethics gives rise to groups like the FireEye-branded FIN6 group, or the hacktivist group Anonymous.

A group like FIN6 is capable of inflicting hundreds of millions of dollars in damage on the global economy, but, because cyber liability and technology E&O insurers have ignored the first two tiers of hackers, they are unable to appreciate the depth and abilities of tier three hackers.

The fourth tier of hackers has been known to insurers, as well as to law enforcement organizations around the world, for years now. This tier is composed of hackers who work for effective cybercrime groups, like FIN6, or larger cybercrime groups; hackers who are ardent supporters of a sociological or political philosophy (hackers for ISIS are a current example); and hackers who work for nation-states, whether directly employed or occasionally contracted.

These hackers have narrow views of the world, their ethics often fall outside the norm of most hackers, and they are constantly trying to expand the ways by which to wage cyber warfare (Stuxnet is a recent successful example); they are the embodiment of ghosts in the network. Tier four hackers are almost always the hackers who cause the most damage while leaving virtually no trace of their activities, and they are beyond insurers’ ability to engage with in any reformative manner.

Human behavior is at the core of every single data breach initiated by a human. The hacking of Equifax is perhaps the most recent egregious example. The Equifax hack occurred because of a company mindset of complacency as well as the hackers’ own psychological motivations. That complacency is clearly demonstrated in the cybersecurity posture the company was maintaining: It can be done later.

The hole that allowed the hackers to gain access and successfully acquire copious amounts of non-public data had a fix that was released in March 2017, but by May 2017 Equifax still had not patched the vulnerability. There is also evidence that Equifax was notified as early as December 2016 that its systems were not secure.

With the PII that a credit rating agency holds, such a delay in patching critical systems is unacceptable. However, with no government or market pressure to behave responsibly, Equifax and its ilk will continue to suffer data breaches time and again, and time and again consumers, and ironically insurers, will continue to exist in a world of ever-increasing uncertainty as to the direction from which financial harm will arrive.

See also: The Costs of Inaction on Encryption  

While the undeniable importance of accounting for human psychology is a severe oversight on the part of insurers, the path forward is equally undeniable: Engage with as many tier one and tier two hackers as possible and ensure that cyber liability and technology E&O applications allow insurers to assess the psychological outlook an applicant has with regard to cybersecurity.

In the April 2016 edition of the PLUS Journal, it was argued that insurers need to work with other companies involved in technology, marketing, lending and other parts of the private sector to create an international competition. This competition would give students a creative outlet to display their skills, whether in coding, design or writing. By establishing such a competition and working with educators, insurers and other companies worldwide can give potential tier one and tier two hackers a creative outlet for their skills, as well as an affirmation that those skills can lead to healthy career paths.

By finding these individuals through an international competition, not only can insurers reduce the risk to their insureds of being hacked by the reduction in numbers of hackers, but they can also find the people who are capable of creating next-generation products.

Without spending the needed effort, though, insurers will continue to lose money at unsustainable levels to cyber liability and technology E&O claims, claims that could have been avoided by investing in adolescents, who, after all, are the future, but who also are the most vulnerable to negative influences.

By also asking the right questions in a cyber liability and technology E&O application, insurers can assess the psychological outlook of a corporate applicant and make a far more informed decision as to whether to underwrite the risk. Had insurers asked Equifax questions that appropriately gauged its perception of the importance of cybersecurity, they could have avoided the risk of underwriting the firm.

Surely, asking eight psychological questions to save $100 million is better than accepting $300,000 in insurance premium and all the uncertainty attached to that premium.
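The arithmetic behind that comparison is worth making explicit. Using only the figures in the sentence above ($300,000 in premium against a potential $100 million loss; the break-even calculation itself is my own back-of-the-envelope sketch), the premium covers the risk only if the annual breach probability is below 0.3%:

```python
# Back-of-the-envelope expected-loss arithmetic using the article's figures.
premium = 300_000             # annual premium accepted by the insurer
potential_loss = 100_000_000  # size of the claim in the Equifax-style example

# The premium covers the expected loss only while p * potential_loss <= premium.
break_even_probability = premium / potential_loss
print(break_even_probability)  # 0.003, i.e. a 0.3% annual chance of breach
```

Any assessment, psychological questions included, that reveals a breach probability above that threshold tells the underwriter the risk is mispriced.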

Over the past four thousand years, battles and wars have often been won by the continued incorporation of new technology into the battlefield, whether that technology was metallurgical or mechanical, but understanding the psychological mindset of the enemy has also been a determining factor. The ever-present value of human behavior has not been lost on most of the private sector, either. Psychology is at the core of a multibillion-dollar industry like advertising, and it is represented daily in the greed and fear index on Wall Street. Understanding the psychological mindset of a company as it concerns its cybersecurity posture, and understanding hackers, must without question be embraced by insurers.

However, until insurers realize the vital relevance of human psychology, they, and their insureds, will continue to lose substantial amounts of currency, time and sense of security, and the stability of the global economy will continue to erode.

What Liabilities Do Robots Create?

The intersection of humanity and robots is being transported from the human imagination and formed into a tangible reality. Many books and movies, like I, Robot and Her, have analyzed various potential impacts of that intersection, but the complete intersection will actually be that of humanity, robots and liability.

It is insufficient, however, to know that advanced robotics and liability will intersect. Advanced robotics is going to thrust upon insurers a world that is extremely different from the one they sought to indemnify in the 20th century. Already, drones and autonomous vehicles are forcing some parts of the insurance sector to try to determine where responsibility exists so that liability can be appropriately assigned, and those efforts will continue for at least the next decade.

The liability created by robots operating alongside humanity now falls on commercial, and especially professional, insurers, who must engineer robotic liability products that provide clients and the global economy with stability while providing insurers a valuable stream of revenue.

There are some ground rules that must be considered before bringing robotic liability to life. First, what is the definition of a robot? For the purposes of this paper, Professor Ryan Calo’s definition of a robot will be used. According to the professor, a robot can sense, process and act on its environment. There is also the realization that currently it may be beyond human ability to create a unified robotic liability doctrine for insurance purposes. This is largely due to the environments in which robots will exist, as well as the ramifications of those environments from a legal, physical and practical standpoint. After all, drones capable of sustained flight are inherently going to exist in a different realm from ground-based autonomous vehicles, and the same is true for robots capable of sub-orbital and intra-planetary flight. Therefore, this paper is going to focus on a discrete part of robotic liability: those robots used in agricultural fields. Another reason for focusing on one area of robotics is to keep things simple while exploring this uncharted part of the insurance sector.

See also: Here Comes Robotic Process Automation

The farmer, the field and the harvest, the most commonplace of settings, provide an area where dimensions of robotic liability can be easily analyzed and understood. Plant husbandry draws on thousands of years of human knowledge, and it is already using aerial drones and big data analytics to maximize crop yields. Additionally, the agricultural arena has a high likelihood of being an area wherein robots cause significant shifts in multiple areas of the economy.

Within the next two or three years, a robot, like this paper’s fictional AARW (autonomous agriculture robotic worker), will be created and sent to the fields to begin to replace human labor when it comes time to harvest a crop. There are multiple reasons for this belief, starting with the advance of robotic technology. In 2015 the DARPA Robotics Challenge was held, and it demonstrated the deployment of an array of robots that will be the ancestors of a robot like AARW. In that competition, robots were required to walk on uneven terrain, accomplish tactile tasks and even drive a traditional vehicle. While the robots in that challenge were not largely or fully autonomous, they are the undeniable major step toward productive autonomous robots.

There are already simple machines that can perform a variety of functions, even learning a function by observing human movements, and the gap between the drawing board and reality is being quickly eroded with the tremendous amount of computer hardware and software knowledge that is produced by both private and public institutions each month.

Moreover, there are strong labor and economic incentives for the introduction of robots into the agricultural field. Robots are able to work non-stop for 12 hours, are free from any form of health and labor laws and can have life expectancies in the five- to 15-year range. Crops are, more often than not, planted in fields with straight rows and require only the robotic ability to pick up an item, like a watermelon, take it to a bin, deposit the melon in the bin and then repeat the same steps on the next watermelon. All this requires only a modest amount of know-how on the robot’s part.

If AARW is built to industrial quality standards, then it will only require a minimal amount of maintenance over the course of each year. And if AARW is powered using solar panels, then the cost of its fuel will be included in the robot’s purchase price, which means that the minor maintenance cost along with a possible storage cost will be the only operating costs of AARW. With its ability to work non-stop and with no overhead costs for complying with human health and labor laws, AARW will be a cheaper alternative to human workers, providing a strong economic incentive for farmers to use robots in the field.

An agricultural robot will, however, create unique exposures for a farmer, and those exposures will cultivate the need for robotic liability. Arguments can be made for completed operations/product liability and technology E&O exposures with AARW in the field. However, there are multiple reasons why it would be unwise to try to relegate liability for AARW to any current product.

First and foremost, there is a strong expectation among scholars and legal experts that robots are going to do unexpected things. Imagine: At harvest time, the farmer brings AARW to the field to collect the crop of watermelons. The field happens to be near a highway on which big rigs travel, and part of the field lies next to a blind corner in the highway. As AARW successfully harvests one row after another, the farmer’s attention drifts, and she begins talking with a neighbor. Suddenly, there is a screech of tires and a loud bang as a big rig slams into AARW, which, for an unknown reason, walked into the highway.

Who should bear responsibility for the untimely demise of AARW?

If AARW were a cow, then the insurer of the big rig would have to reimburse the farmer for the loss of one of her cows. In certain respects, AARW and a cow are the same in that they can sense, process and act upon their environment. However, a cow has what is often described as a mind of its own, which is why insurance companies and the law have come to place the fault of a rogue cow on the unwitting vehicle operator instead of the aggrieved farmer.

AARW, though, is not a cow. It is a machine created to harvest produce. Does the software that controls the robot’s actions equate to the free will of an animal, like a cow? The farmer who lost the cow does not demand her money back from the rancher who sold her a reckless bovine product. Why should the creator of the robot be expected to reimburse the farmer for the loss of AARW? How does it make sense for product liability to come into play when the rancher shares no blame for the indiscreet cow? Technology companies have been extremely successful at escaping liability for the execution of poorly crafted software, so the farmer is unlikely to find any remedy in bringing a claim against the provider of the software, even if it is a separate entity from the one that assembled AARW.

Regardless of where blame is assigned, the issue would be awkward for insurers that tried to force liability for the robot’s actions into any current insurance product. At worst, the farmer would not be made whole (technology E&O), and, at best, changing existing laws would likely only partially compensate the farmer for the loss of AARW.

See also: The Need to Educate on General Liability  

The liability waters are already murky without robotic liability. Machine learning will likely create situations that are even more unexpected than the above possibility. Imagine if AARW imitated the farmer in occasionally giving free produce samples to people passing the field. In the absence of robotic liability insurance, who should be responsible for a mistake or offending action on the robot’s part?

It would be unfortunate to place all of the blame on AARW or the farmer. These situations also call into question the quality of the programming with which the robot was created. In their paper “Praise the Machine! Punish the Human!,” M.C. Elish and Tim Hwang present historical evidence that makes it unwise to expect liability to be appropriately adjudicated were a farmer to sue the creator of AARW.

With an autonomous robot like AARW, laws related to human juveniles offer a useful analogy. A juvenile is responsible if she decides to steal an iPad from a store, but, if she takes the family Prius for a joyride, then the parents are responsible for any damage she causes. Autonomous robots will inherently be allowed to make choices on their own, but should responsibility be shared between the robot and the farmer as it is between a child and a parent under juvenile law?

From the insurer’s standpoint, it makes sense to assign responsibility to the appropriate party. If AARW entered a highway, the responsibility should fall on the farmer, who should have been close enough to stop it. Giving away produce without permission, which borders on petty theft, is a case where AARW incorrectly applied an action it learned, so the robot remains largely responsible.

To more fairly distribute blame, it may be worthwhile for robotic liability to contain two types of deductible. One would be the deductible paid when 51% of the blame were due to human negligence, and such a deductible would be treble the second deductible that would apply if 51% of the blame were due to an incorrect choice on the robot’s part. This would help to impress on the human the need to make responsible choices for the robot’s actions, while also recognizing that robots will sometimes make unexpected choices, choices that may have been largely unforeseeable to human thinking. Such assignment of responsibility should also have a high chance of withstanding judicial and underwriting scrutiny.
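The two-tier deductible described above can be sketched in code. Everything here is illustrative: the function name, the majority-blame threshold and the dollar amounts are assumptions for the sake of the example, not terms from any actual policy.

```python
# Hypothetical sketch of the proposed two-deductible scheme for
# robotic liability. Amounts and the 51% threshold are illustrative.

def robotic_liability_deductible(human_blame_pct: float,
                                 base_deductible: float) -> float:
    """Return the deductible owed under the proposed two-tier scheme.

    If a majority (51% or more) of the blame is human negligence, the
    insured pays treble the base deductible; if a majority of the blame
    falls on an incorrect choice by the robot, only the base deductible
    applies.
    """
    if human_blame_pct >= 51:
        return 3 * base_deductible   # treble deductible: human negligence
    return base_deductible           # base deductible: robot error

# Example with an assumed $1,000 base deductible:
print(robotic_liability_deductible(70, 1000))  # human mostly at fault -> 3000
print(robotic_liability_deductible(30, 1000))  # robot mostly at fault -> 1000
```

In the highway scenario, the farmer’s inattention would likely put human blame over the threshold and trigger the treble deductible; in the free-produce scenario, the robot’s misapplied learning would leave only the base deductible.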

Another drawback of relegating robots to any existing form of liability coverage is underwriting expertise. Currently, most insurers that offer cyber liability and technology E&O seem to possess little expertise about the intersection of risk and technology. That lack hurts insurers and their clients, who suffer time and again from inadequate coverage and unreasonable pricing. It would be advantageous to create robotic liability that would be unencumbered by such existing deficiencies. By establishing a new insurance product and entrusting it to those who do understand the intersection of humans, liability and robots, insurers will be able to satisfy the demands of those who seek to leverage robots while also establishing a reliable stream of new revenue.

A 21st century product ought to be worthy of a 21st century insurance policy.

Another aspect of exposure that needs to be considered is how a robot is seen socially, something that Professor Ryan Calo discusses in his paper “Robotics and the Lessons of Cyberlaw.” Robots are likely to be viewed as companions, or valued possessions, or perhaps even friends.

Around the turn of this century, Sony created an experimental robotic dog named Aibo. Now a number of Aibos are enjoying a second life because of the pleasure people in retirement homes take in interacting with them. One of the original Sony engineers even founded his own company just to repair dysfunctional Aibos.

While that particular robot is fairly limited in its interactive abilities, it shows how willing people are to consider robots as companions instead of mechanical tools with limited value. It is more than likely that people will form social bonds with robots. And, while it is one thing to be verbally annoyed at a water pump for malfunctioning and adding extra work to an already busy day, mistreatment of a robot by its employer may be seen and felt differently by the robot’s co-workers. Some people already treat a program like Apple’s Siri inappropriately: they tell Siri that it is sexy, ask what it “likes” in a romantic sense and exhibit other inappropriate behaviors toward the program, even in professional settings. While such behavior has not yet resulted in an EPL (employment practices liability) claim, such unwarranted behavior may not be tolerated.

Consequently, the additional exposures created by a robot’s social integration into human society will more than likely result in adding elements to an insurance claim that products liability, technology E&O and other current insurance products would be ill-suited to deal with.

See also: Of Robots, Self-Driving Cars and Insurance

Advanced robotics makes some of the future murky. Will humans be able to code self-awareness into robots? Are droid armies going to create more horrific battlegrounds than those created by humans in all prior centuries? Are autonomous vehicles the key to essentially eliminating human fatalities?

However useful those kinds of questions are, the answer to each, for the foreseeable future, is unknown. What we do know for sure is that advanced robotics is starting to move from the drawing board into professional work environments, creating unexplored liability territory. Accordingly, the most efficient way to go into the future is by creating robotic liability now because, with such a product, insurers can both generate a new stream of revenue and help provide a more economically stable world.

The Need to Educate on General Liability

In a perfect world, insurance buyers would understand their products just as well as their insurance agents do. This would save a few headaches for everyone involved, and it would probably streamline the process on all ends. However, the reality is that most business owners don’t understand the extent of the insurance products they purchase. Then again, no one should expect them to.

Insurance products are highly complex vehicles. Few business owners have the time to invest in becoming experts in the field or in the products they purchase. Even the best insurance agents spend years learning about the products they sell, many of which change frequently as the economy changes.

That being said, no business owner should simply buy a product without understanding the most important aspects regarding what it does and does not cover. In truth, a highly skilled insurance agent should never let them, either. Here’s where there can be a gap between how much insurance a business purchases and how much it actually needs, showing why educating business owners on the extent of their insurance really matters.

False Perceptions of General Liability Are Common

Many customers tend to believe their insurance covers more than it actually does. This situation could probably be applied to any insurance product, but general liability policies are often the most frequently misunderstood by buyers.

See also: What to Expect on Management Liability  

To put it simply, far too many businesses are purchasing less insurance coverage than they should. In a sense, many are taking a huge gamble, believing their risk exposure is less than it actually is or that their preventative measures, such as employee training, can shield them from those risks. While risk prevention definitely helps, it is ultimately far from the bulletproof shield many companies think it is. Most companies invest in prevention to get a better rate on their insurance, while maintaining the false perception that their general liability coverage protects them against a multitude of risks not actually defined in the policy.

As a company scales in size, so, too, does its likelihood of experiencing losses related to cyber liability, employee fraud, fiduciary liability, directors and officers (D&O) or workplace violence. Yet many companies seem not to realize their exposure.

This would, of course, be less troubling if companies were purchasing policies that actually covered those kinds of risks. Overwhelmingly, they’re choosing to avoid those insurance products altogether. According to Chubb’s survey on private company risk, non-purchasers believed their general liability policy covered:

  • Directors and Officers Liability (65%)
  • Employment Practices Liability (60%)
  • Errors & Omissions Liability (52%)
  • Fiduciary Liability (51%)
  • Cyber Liability (39%)

Businesses aren’t failing to purchase enough liability coverage because they’re unnecessary risk takers. Most, it seems, simply have false perceptions about what their general liability will and won’t do.

A small business may think its general liability policy covers a server hack. Yet, lo and behold, when a server gets hacked and the ensuing liability claims start pouring in, that small business may quickly find itself underwater. In fact, the U.S. National Cyber Security Alliance found that 60% of small companies went out of business within six months of a cyber attack. That seems extreme, but the average cost for a small business to clean up after a hack is $690,000, according to the Ponemon Institute. How many small or medium-sized businesses can easily absorb that kind of cost without insurance coverage? Not many.

Similarly, mid-sized companies may believe their general liability policy covers directors and officers, leaving the company with unnecessary risk exposures should an incident occur. If, for example, a company begins operating internationally and fails to effectively meet one of the federal regulations governing its industry, a general liability policy won’t help protect the company from impending lawsuits. Any directors held personally responsible may find their own personal assets at risk. Given what we learned from the Chubb survey, it’s quite likely that most directors may think they’re fine with the minimal coverage they receive from a general liability policy. A costly mistake, to be sure.

Who’s to Blame?

We’ll leave the finger pointing aside for now and settle on this: The customer is always right, but he’s not always well-informed. As every insurance agent knows, the amount of time it takes to fully understand an insurance product can be extensive. Business owners, in general, lack the time to invest in fully understanding the products they purchase. It should come as no surprise, then, that misunderstandings arise over what general liability policies actually cover and what risks they simply won’t mitigate.

See also: ISO Form Changes Commercial General Liability  

Insurance agents have a responsibility to use their knowledge to help business owners better understand and sift through those misconceptions. More needs to be done to help decision-makers understand what they are and are not getting from their insurance.

Helping businesses better understand the ins and outs of their general liability policy is a win-win all around.