What Liabilities Do Robots Create?

The intersection of humanity and robots is moving out of the human imagination and into tangible reality. Books and movies like I, Robot and Her have explored various potential impacts of that intersection, but the complete intersection will actually be that of humanity, robots and liability.

It is insufficient, however, to know that advanced robotics and liability will intersect. Advanced robotics is going to thrust upon insurers a world that is extremely different from the one they sought to indemnify in the 20th century. Already, drones and autonomous vehicles are forcing some parts of the insurance sector to try to determine where responsibility exists so that liability can be appropriately assigned, and those efforts will continue for at least the next decade.

It now falls on commercial, and especially professional, insurers to address the liability created by robots operating alongside humanity: to engineer robotic liability products that provide clients and the global economy with stability while providing insurers a valuable stream of revenue.

There are some ground rules that must be considered before bringing robotic liability to life. First, what is the definition of a robot? For the purposes of this paper, Professor Ryan Calo’s definition will be used: a robot can sense, process and act on its environment. There is also the realization that it may currently be beyond human ability to create a unified robotic liability doctrine for insurance purposes. This is largely due to the environments in which robots will exist, as well as the legal, physical and practical ramifications of those environments. After all, drones capable of sustained flight are inherently going to exist in a different realm from ground-based autonomous vehicles, and the same is true for robots capable of sub-orbital and interplanetary flight. Therefore, this paper is going to focus on a discrete part of robotic liability: robots used in agricultural fields. Another reason for focusing on one area of robotics is to keep things simple while exploring this uncharted part of the insurance sector.

See also: Here Comes Robotic Process Automation

The farmer, the field and the harvest, the most commonplace of settings, provide an area where the dimensions of robotic liability can be easily analyzed and understood. Plant husbandry draws on thousands of years of human knowledge, and it is already using aerial drones and big-data analytics to maximize crop yields. Additionally, agriculture has a high likelihood of being an arena in which robots cause significant shifts in multiple areas of the economy.

Within the next two or three years, a robot like this paper’s fictional AARW (autonomous agricultural robotic worker) will be created and sent to the fields to begin to replace human labor when it comes time to harvest a crop. There are multiple reasons for this belief, starting with the advance of robotic technology. The 2015 DARPA Robotics Challenge demonstrated an array of robots that will be the ancestors of a robot like AARW. In that competition, robots were required to walk on uneven terrain, accomplish tactile tasks and even drive a traditional vehicle. While the robots in that challenge were not largely or fully autonomous, they represent an undeniable major step toward productive autonomous robots.

There are already simple machines that can perform a variety of functions, even learning a function by observing human movements, and the gap between the drawing board and reality is closing quickly thanks to the tremendous amount of computer hardware and software knowledge produced by both private and public institutions each month.

Moreover, there are strong labor and economic incentives for the introduction of robots into the agricultural field. Robots are able to work non-stop for 12 hours, are free from any form of health and labor laws and can have life expectancies in the five- to 15-year range. Crops are, more often than not, planted in fields with straight rows and require only the robotic ability to pick up an item, like a watermelon, take it to a bin, deposit the melon in the bin and then repeat the same steps on the next watermelon. All this requires only a modest amount of know-how on the robot’s part.

If AARW is built to industrial quality standards, then it will only require a minimal amount of maintenance over the course of each year. And if AARW is powered using solar panels, then the cost of its fuel will be included in the robot’s purchase price, which means that the minor maintenance cost along with a possible storage cost will be the only operating costs of AARW. With its ability to work non-stop and with no overhead costs for complying with human health and labor laws, AARW will be a cheaper alternative to human workers, providing a strong economic incentive for farmers to use robots in the field.
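To make the economic incentive concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it is hypothetical, chosen only to illustrate how the amortized cost of a robot might compare with seasonal wages; real numbers would vary widely by crop, region and machine.

```python
# Back-of-the-envelope comparison of AARW vs. seasonal human labor.
# Every figure below is hypothetical and exists only to illustrate
# the shape of the economic argument made above.

AARW_PURCHASE_PRICE = 150_000    # one-time cost, solar power included
AARW_LIFESPAN_YEARS = 10         # mid-range of the five- to 15-year estimate
AARW_ANNUAL_MAINTENANCE = 2_000  # industrial build, minimal upkeep
AARW_ANNUAL_STORAGE = 500        # off-season storage

HUMAN_CREW_SIZE = 6              # workers replaced by one robot on 12-hour shifts
HUMAN_SEASONAL_WAGE = 8_000      # per worker, per harvest season

annual_robot_cost = (AARW_PURCHASE_PRICE / AARW_LIFESPAN_YEARS
                     + AARW_ANNUAL_MAINTENANCE
                     + AARW_ANNUAL_STORAGE)
annual_human_cost = HUMAN_CREW_SIZE * HUMAN_SEASONAL_WAGE

print(f"Robot, amortized per year: ${annual_robot_cost:,.0f}")
print(f"Human crew, per season:    ${annual_human_cost:,.0f}")
```

Under these assumed figures, the robot's amortized annual cost is roughly a third of the crew's seasonal wages, which is the kind of gap that would push farmers toward adoption.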

An agricultural robot will, however, create unique exposures for a farmer, and those exposures will cultivate the need for robotic liability. Arguments can be made for completed operations/product liability and technology E&O exposures with AARW in the field. However, there are multiple reasons why it would be unwise to try to relegate liability for AARW to any current product.

First and foremost, there is a strong expectation among scholars and legal experts that robots are going to do unexpected things. Imagine: At harvest time, the farmer brings AARW to the field to collect the crop of watermelons. The field happens to be near a highway on which big rigs travel, and part of the field lies next to a blind corner in the highway. As AARW successfully harvests one row after another, the farmer’s attention drifts, and she begins talking with a neighbor. Suddenly, there is a screech of tires and a loud bang as a big rig slams into AARW, which, for an unknown reason, walked into the highway.

Who should bear responsibility for the untimely demise of AARW?

If AARW were a cow, then the insurer of the big rig would have to reimburse the farmer for the loss of one of her cows. In certain respects, AARW and a cow are the same in that they can sense, process and act upon their environment. However, a cow has what is often described as a mind of its own, which is why insurance companies and the law have come to place the fault of a rogue cow on the unwitting vehicle operator instead of the aggrieved farmer.

AARW, though, is not a cow. It is a machine created to harvest produce. Does the software that controls the robot’s actions equate to the free will of an animal, like a cow? The farmer who lost the cow does not demand her money back from the rancher who sold her a reckless bovine product. Why should the creator of the robot be expected to reimburse the farmer for the loss of AARW? How does it make sense for product liability to come into play when the rancher shares no blame for the indiscreet cow? Technology companies have been extremely successful at escaping liability for the execution of poorly crafted software, so the farmer is unlikely to find any remedy in bringing a claim against the provider of the software, even if it is a separate entity from the one that assembled AARW.

Regardless of where blame is assigned, the issue would be awkward for insurers that tried to force liability for the robot’s actions into any current insurance product. At worst, the farmer would not be made whole (technology E&O), and, at best, changing existing laws would likely only partially compensate the farmer for the loss of AARW.

See also: The Need to Educate on General Liability  

The liability waters are already murky without robotic liability. Machine learning will likely create situations that are even more unexpected than the above possibility. Imagine if AARW imitated the farmer in occasionally giving free produce samples to people passing the field. In the absence of robotic liability insurance, who should be responsible for a mistake or offending action on the robot’s part?

It would be unfortunate to place all of the blame on AARW or the farmer. The situations also call into question the quality of programming with which the robot was created. In the paper by M.C. Elish and Tim Hwang, “Praise the Machine! Punish the Human!”, the historical evidence presented makes it unwise to expect liability to be appropriately adjudicated were a farmer to sue the creator of AARW.

With an autonomous robot like AARW, it is possible to bring into consideration laws related to human juveniles. A juvenile is responsible if she decides to steal an iPad from a store, but, if she takes the family Prius for a joyride, then the parents are responsible for any damage the juvenile causes. Autonomous robots will inherently be allowed to make choices on their own, but should responsibility apply to the robot and the farmer as it does in juvenile law for a child and a parent?

From the insurer’s standpoint, it makes sense to assign responsibility to the appropriate party. If AARW entered a highway, the responsibility should fall on the farmer, who should have been close enough to stop it. Giving away produce, which could amount to petty thievery, is a different matter: because AARW incorrectly applied an action it learned, the robot remains largely responsible.

To more fairly distribute blame, it may be worthwhile for robotic liability to contain two types of deductible. One would be the deductible paid when 51% of the blame were due to human negligence, and such a deductible would be treble the second deductible that would apply if 51% of the blame were due to an incorrect choice on the robot’s part. This would help to impress on the human the need to make responsible choices for the robot’s actions, while also recognizing that robots will sometimes make unexpected choices, choices that may have been largely unforeseeable to human thinking. Such assignment of responsibility should also have a high chance of withstanding judicial and underwriting scrutiny.
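As a sketch of how such a two-deductible provision might be expressed, consider the following Python snippet. The 51% threshold and treble multiplier come from the proposal above; the dollar figures and function name are hypothetical.

```python
def applicable_deductible(human_blame_pct: float,
                          base_deductible: float) -> float:
    """Pick which of the two robotic-liability deductibles applies.

    Rule sketched above: if a majority (51% or more) of the blame
    rests on human negligence, the deductible is treble the base;
    if the majority rests on the robot's own incorrect choice,
    only the base deductible applies.
    """
    if human_blame_pct >= 51.0:
        return 3 * base_deductible   # human negligence: treble deductible
    return base_deductible           # robot's unexpected choice: base deductible

# Hypothetical examples with a $5,000 base deductible:
print(applicable_deductible(60.0, 5_000))  # farmer 60% negligent -> 15000
print(applicable_deductible(40.0, 5_000))  # robot mostly at fault -> 5000
```

In practice, apportioning blame between human and robot would itself be a claims-adjustment question, but the payout logic can remain this simple.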

Another problem with relegating robots to any existing form of liability coverage involves underwriting expertise. Currently, most insurers that offer cyber liability and technology E&O seem to possess little expertise about the intersection of risk and technology. That lack hurts insurers and their clients, who suffer time and again from inadequate coverage and unreasonable pricing. It would be advantageous to create robotic liability that is unencumbered by such existing deficiencies. By establishing a new insurance product and entrusting it to those who do understand the intersection of humans, liability and robots, insurers will be able to satisfy the demands of those who seek to leverage robots while also establishing a reliable stream of new revenue.

A 21st century product ought to be worthy of a 21st century insurance policy.

Another aspect of exposure that needs to be considered is how a robot is seen socially, something that Professor Calo discusses in his paper “Robotics and the Lessons of Cyberlaw.” Robots are likely to be viewed as companions, or valued possessions, or perhaps even friends.

At the turn of this century, Sony created an experimental robotic dog named Aibo. Now a number of Aibos are enjoying a second life because of the pleasure people in retirement homes experience when interacting with them. One of the original Sony engineers even created his own company just to repair dysfunctional Aibos.

While that particular robot is fairly limited in its interactive abilities, it provides an example of how willing people are to consider robots companions instead of mechanical tools with limited value. It is more than likely that people will form social bonds with robots. And, while it is one thing to be verbally annoyed at a water pump for malfunctioning and adding extra work to an already busy day, mistreatment of a robot by its employer may be seen and felt differently by the robot’s co-workers. Some people already treat a program like Apple’s Siri inappropriately: they tell Siri that it is sexy, ask what it “likes” in a romantic sense and exhibit other behaviors toward the program, even in a professional setting, that are inappropriate. While such behavior has not yet resulted in an EPL (employment practices liability) claim, such unwarranted behavior may not be tolerated.

Consequently, the additional exposures created by a robot’s social integration into human society will more than likely result in adding elements to an insurance claim that products liability, technology E&O and other current insurance products would be ill-suited to deal with.

See also: Of Robots, Self-Driving Cars and Insurance

Advanced robotics makes some of the future murky. Will humans be able to code self-awareness into robots? Are droid armies going to create more horrific battlegrounds than those created by humans in all prior centuries? Are autonomous vehicles the key to essentially eliminating traffic fatalities?

However useful those kinds of questions are, the answer to each, for the foreseeable future, is unknown. What we do know for sure is that advanced robotics is starting to move from the drawing board into professional work environments, creating unexplored liability territory. Accordingly, the most efficient way to go into the future is by creating robotic liability now because, with such a product, insurers can generate a new stream of revenue while at the same time providing a more economically stable world.

The Costs of Inaction on Encryption

Alarm systems have a long and varied history — from geese in ancient Rome, to noise makers that announced the presence of an intruder, to present-day electronic sensors and lasers. Originally, the creation of alarms was driven by the psychological need all humans have to establish a safe environment for themselves. Today, that same need exists, but it has been extended to include other concerns, such as valued personal possessions, merchandise and intellectual property. In the cyber realm, security is as important as it is in the physical world because people must be able to feel secure in their ability to store sensitive, high-value data. Without that sense of security, the cyber realm would lose almost all of its relevance.

Cybersecurity is established by various hardware and software components, but none is more essential than strong encryption. It is such encryption that keeps bank transactions, online purchases and email accounts safe. However, there is a disturbing worldwide governmental trend toward weakening encryption, exemplified in the legal disagreement earlier this year between Apple and the U.S. government. While definite aspects of the dispute fall outside the professional insurance sphere, there is an undeniable part of the battle for strong encryption that the professional insurance sector must not fail to acknowledge and address. The outcome of this struggle will be felt well into the 22nd century, and, at least in the business arena, it will perhaps be borne most keenly by cyber liability and technology E&O insurers.

With global attempts underway to reduce the effectiveness of encryption, no insurer can claim it lacks a part in the effort for resilient and ever-evolving encryption and cybersecurity measures. The Chinese government is not a supporter of privacy; it has hacked Google’s Gmail service and the Dalai Lama’s email account to gain access to information it deemed disruptive, and it has been stepping up its “investigations” into products produced by U.S.-based technology companies. Furthermore, after the 2015 attack in Paris and the 2016 attack in Brussels, the debate over whether encryption should be allowed was re-ignited in Europe and the U.K., and the French, Hungarian and British governments have recently made various attempts at weakening or removing encryption. With this global challenge before them, insurers are required to be completely aware of what is at risk, and they must help pave a path forward that balances the profitability of products (like cyber liability and technology E&O) with the protection those products should afford any insured.

See also: Best Practices in Cyber Security

Apple, perhaps, serves as the best example of how governmental interference with cybersecurity is an issue that requires direct and immediate intervention from insurers. Thousands of businesses around the world rely on the iPhone and iPad for productivity purposes, and almost all of those businesses also rely on the security those devices provide, both from a hardware and a software standpoint. Recently, the U.S. government attempted, in different judicial battles, to force Apple to write code that would give the government a master key to the data of any iPhone. The U.S. government is also pursuing a legislative avenue: a law that would force U.S. companies to give the government unfettered retrieval of any data on which it sets its sights.

To provide such access would almost always require companies to write software code that is purposefully compromised from a security standpoint. It would be extremely unwise for professional insurance companies to assume this disagreement is only between the technology sector and world governments. An outcome favorable to the U.S. government would have direct and immediately negative effects on insurers that offer cyber liability and technology E&O insurance in the U.S., and it would set a dangerous precedent emboldening other governments to justify similar breaches that would allow them to acquire what should be secure data.

From a cyber liability standpoint, any vulnerability in software code gives hackers another way to compromise a victim’s computers and network. If a company like Apple (on which thousands of businesses depend to keep them safe) has to create a master key, then all of the businesses that use Apple products will be vulnerable to attack. The U.S. government has a long history of being unable to keep its own data safe, which means that, in time, hackers will be able to figure out what entrance point was created and then exploit it. The most worrisome entities that might access the backdoor are non-democratic nation-states, because they have the most to gain from exploiting any vulnerabilities in U.S.-based companies. But U.S.-based companies are not the only ones that use Apple products, which means companies located anywhere would also be vulnerable. Additionally, if world governments put restraints on encryption, either making it illegal or limiting the ways data can be encoded, then, again, that gives power to those entities that would exploit weak encipherment to the detriment of the private sector.
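To see why a single master key is so dangerous, consider this conceptual sketch, written in Python with the widely used cryptography package. The escrow scenario is hypothetical and deliberately simplified; real key-escrow proposals differ in detail, but the core problem, that one leaked key unlocks every customer’s data, is the same.

```python
# Conceptual sketch: why an escrowed "master key" endangers every
# user at once. Hypothetical scenario, deliberately simplified.
from cryptography.fernet import Fernet

# The vendor is compelled to keep one key capable of decrypting
# customer data and hands a copy to the government.
master_key = Fernet.generate_key()
escrowed_copy = master_key          # the copy that must never leak

# Customers encrypt sensitive data, trusting the vendor's product.
vault = Fernet(master_key)
secret = vault.encrypt(b"quarterly payroll data")

# If the escrowed copy is ever stolen -- and government data stores
# have been breached before -- the thief reads everything:
stolen = Fernet(escrowed_copy)
print(stolen.decrypt(secret))       # b'quarterly payroll data'
```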

From a technology E&O standpoint, any U.S. government demand that an insured weaken its products invites breaches of contract, which will drive claims against technology E&O policies. If Foxconn, which builds the iPhone for Apple, were forced to alter firmware used in the iPhone to allow at least one software flaw, then Apple could sue Foxconn for breach of contract were Apple to learn that Foxconn had obeyed a government order to create a security bypass in the firmware code. Worse yet would be a company like FireEye being forced to reduce the effectiveness of the virtual execution engines at the heart of its malware analysis appliances. FireEye and other cybersecurity companies are what often stand between a hacker and its victim. Should a cybersecurity company ever be forced to obey such a government order, little would stand between a hacker and its potential victims. Moreover, all of the companies that depend on the products of a cybersecurity company would also be in a position to bring claims against the insured organization, which would certainly be detrimental to technology E&O insurers.

To defend itself and its products from government interference, Apple is implementing a security feature that removes its own ability to bypass the iPhone’s security. While such a method works from a simplicity standpoint, it will not work for a majority of technology companies; cybersecurity and cloud providers are two examples of where such a solution would fail. Additionally, if a law were passed that forced a company, by way of a court order, for example, to decrypt information on its products, then the company so ordered would be put into a bind. Cyber liability and technology E&O insurers could also add exclusions that would void policies if an insured organization complied with a governmental request to create a backdoor.

However, it would be extremely difficult for an insurer to prove a backdoor was created deliberately, and, ultimately, such exclusions would be ethically ambiguous, given that they would punish an insured firm for obeying the rule of law. Companies could also contest each governmental request, assuming no law makes it illegal to deny such a request, but not all companies have the time or financial resources with which to fight a government. The only reasonable avenue for reining in disruptive governmental orders, then, is for insurers, technology companies and others to unite and block any legislative attempt to force a technology company to create a security gap. The resistance will also need to fight any attempt to weaken or outlaw any type of encryption.

See also: Paradigm Shift on Cyber Security

Currently, the relationship that exists between the insurance and technology sectors is that of provider and client, but that relationship must now evolve into a partnership. The technology sector cannot afford to go without cyber liability and technology E&O insurance because almost every company needs to offset technological risk now that we are in a globally connected and highly litigious age. Insurers also need to continue offering cyber liability and technology E&O policies because they have the clout and financial strength to help protect companies — especially small- and medium-sized ones — from an ever-changing technological landscape. Then, too, whichever insurer develops a realistic understanding of the intersection of risk and technology will be in a position to enrich itself.

The path forward, then, is to create a coalition whose first goal would be to stay on top of pending and current judicial cases, as well as bills being drafted or voted on in any legislature worldwide, that would degrade the security strength of any member’s product. The U.S. government has recently tried to force Apple to create a master key to one of its product lines, and there is no reason to believe it will not press other companies (like cloud providers) to build similar backdoors into their products. To work against such actions, the coalition might be composed of two representatives from each sector’s main representative organization; for the professional insurance sector that would be PLUS, and for technology companies that would be IEEE.

Furthermore, the coalition might also include members from automotive manufacturers, educators and telecommunication firms. Its protective approach would be to identify cases or bills and then bring all resources forward to eliminate or mitigate the offending threat. A recent judicial example of a threat to the putative coalition was the dispute between Apple and the U.S. government in the Central District of California, Eastern Division. A current legislative example is the Burr-Feinstein anti-encryption draft, which seeks to allow courts to order a company to decrypt information it has encoded, such as the data an iPhone protects for its user.

In a judicial case, the main measure could be filing amicus curiae briefs on behalf of the aggrieved organization, but other measures might include ensuring the defendant crafts the most persuasive arguments against governmental interference and appeals unfavorable rulings. On the legislative front, measures might include lobbyists but, more importantly, ought to involve the unity achieved by the coalition’s existence, working with an organization like the EFF and even creating public relations campaigns to rally the support of the world populace. In the rare instances when a government attempts to work with the private sector to understand its concerns — for instance, as the U.S. government is trying to do with the proposed “Digital Security Commission” — the coalition would need to support such efforts as much as possible.

It is true that the coalition’s efforts in countries like China and Russia might be limited, and they will also be limited when a country feels that a criminal act, like terrorism, is better dealt with by eroding encryption and cybersecurity measures. Where China is concerned, insurers could consider increasing the amount of re-insurance they purchase on their cyber liability and technology E&O portfolios to offset the damage from increased claims. Insurers will also need to be extremely cautious when providing cyber liability and technology E&O coverage to organizations that have close relationships with non-democratic governments (like the Chinese government) or that produce products with a high likelihood of being the result of IP theft, such as any mid- to high-end binary processor.

The pursuit of the best encryption and cybersecurity measures needs to be unencumbered by the efforts of any government, just as alarm systems have been free to evolve over the past two or three millennia. This can only be achieved, though, through the unified actions and vigilance of a coalition. Encryption and resilient cybersecurity frameworks are essential and irreplaceable elements of a safely connected world. To limit, in any way, the efforts to perfect those elements, or to purposefully reduce their effectiveness, is irresponsible, regardless of whether the reason is national security or the pursuit of breaking a criminal enterprise. Lloyd’s, and other organizations involved with cyber liability and technology E&O insurance, see a future where insurers achieve healthy profits from those two products. However, if insurers do not responsibly oppose governmental attacks on encryption and cybersecurity, that profitable future will give way to one of excessive claims, damaging losses and very little profit.

The Questions That Aren’t Being Asked

In Aldous Huxley’s 1932 novel Brave New World, many original ideas were posited about a futuristic society. Two of those ideas, now appearing in our present, involve eugenics and an ever-increasing reliance on technology.

Techniques like CRISPR (clustered regularly interspaced short palindromic repeats) to genetically engineer a human embryo, and technological advances like self-driving vehicles, could be said to represent some of Huxley’s notions. However, professional liability underwriters, especially those underwriting cyber liability and tech E&O, are out of phase with this “brave new world,” and this fact creates a dangerous situation for both those underwriters and an economic world dependent on them. To be responsible and successful in the present and into the future, the professional liability insurance sector must look backward to look forward and, in so doing, create a breed of underwriters who are every bit as creative as the future will be.

Being out of sync with present-day reality is clearly reflected in the questions not asked on cyber liability and tech E&O applications. For instance, one current cyber liability application does not ask what type of firewall an applicant is using. A company can use a simple device with a firewall feature and claim to have a firewall in place, but that device will not come close to equaling the protection offered by a hardware-based NGFW (next-generation firewall). The same application also does not ask whether multiple hardware and software ecosystems are used, even though the answer to that question, especially for a medium-sized or large business, offers significant insight into the company’s cybersecurity approach. Additionally, this particular application does not ask whether an applicant is using the services of a cybersecurity firm. Those kinds of questions, and the answers to them, convey an enormous amount of information about the cybersecurity posture of an applicant and, in turn, provide significant insight into whether a risk is worth underwriting and at what cost. For such questions to be missing from an application is dangerous for insurance companies and their clients.
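As an illustration of how much underwriting signal those missing questions carry, here is a toy Python rubric. The questions, weights and example applicant are all hypothetical; a real underwriting model would be far more involved.

```python
# Illustrative only: a toy scoring rubric showing how the missing
# application questions could feed an underwriting decision. The
# questions, weights and example applicant are all hypothetical.

SAFEGUARD_WEIGHTS = {
    "hardware_ngfw_deployed": 3,       # NGFW vs. a basic device with a firewall feature
    "multiple_ecosystems_used": 2,     # mixed hardware/software ecosystems
    "cybersecurity_firm_retained": 3,  # outside security expertise on call
}

def safeguard_score(answers: dict) -> int:
    """Sum the weights of the safeguards an applicant actually has."""
    return sum(weight for question, weight in SAFEGUARD_WEIGHTS.items()
               if answers.get(question, False))

applicant = {"hardware_ngfw_deployed": False,
             "multiple_ecosystems_used": True,
             "cybersecurity_firm_retained": False}

score = safeguard_score(applicant)
print(f"Safeguard score: {score} of {sum(SAFEGUARD_WEIGHTS.values())}")
```

An application that never asks these questions cannot produce even this crude score, leaving the underwriter to price the risk blind.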

See also: Space, Aviation Risks and Higher Education

The current situation with technology E&O applications is equally worrisome. For example, the exclusions list on one recently updated technology E&O policy contains no exclusion for computer languages known to be highly prone to cyber breaches. Theoretically, an insured software company could be writing code in Adobe Flash or JavaScript, languages that should be avoided. By not excluding those languages, the insurer is exposed to the adverse results of claims and lawsuits caused by an insured using hazardous code. Perhaps even worse, this insurer does not exclude wireless products that lack proper encryption. Thus, if a company that produces baby monitors creates a product that broadcasts its signal unencrypted, claims could arise from a concerned consumer of that product. After all, what reasonable parent would allow anyone to spy on her child?

This issue is likely even worse because successful lawsuits have already been brought, time and again, against manufacturers of products that lack proper wireless encryption. An insurer’s failure to use such exclusions, both to protect itself and to encourage better behavior from its insureds, calls into question whether a technology E&O insurer is in sync with technology and the current legal environment. With underwriters out of step in the present, one must wonder how they will be able to help drive the world forward in the future.

Other parts of the professional insurance sphere are also poorly positioned to be in harmony with the future. In the near future, robots will be introduced into social environments like nursing homes. If a robot injects medication into a patient, prescribes a medication or lifts a patient from a wheelchair to a bed, that takes an already risky situation into an unexplored legal realm. If a patient suffers an adverse reaction to a drug injected by a robot, how will the nursing home be protected by any of its insurance policies? Or what if a robot is provided by the nursing home to a patient who needs companionship? If the robot malfunctioned and could not be replaced, and the patient sank into a depressed state and died, how would insurance cover a wrongful death suit by the patient’s family? A general liability policy certainly would not cover such an event, and an allied health policy is not currently worded to handle such a risk. What about the manufacturer of that robot? Would a technology E&O policy step forward and indemnify it?

Many countries, China, Japan and the U.S. among them, have rapidly aging populations, and there are simply not enough people entering the field of senior care to handle the influx of those who need care in their golden years. This means that robotics companies are going to be filling that void and, in so doing, will create an unprecedented situation that will require the professional insurance sector to provide guidance and protection to a rapidly aging world. Providing that guidance and protection, however, will require professional underwriters to understand the intersection of technology, human care and the law, an intersection with which underwriters are currently less than conversant.

So how do insurance companies offering cyber liability, technology E&O and other professional insurance get in sync with the evolving world they are underwriting? There was once an international competition, known as ThinkQuest, that encouraged students in the seventh through twelfth grades to form groups of two or three people and build educational websites. It was supported by both governmental and private organizations, had strong support from educators in more than thirty countries and rewarded the most successful competitors with scholarships of as much as $25,000. A similar approach must now be embraced and championed by the insurance industry. The brilliance of ThinkQuest was that it brought together young people who could appreciate and understand a multitude of ideas and numerous bodies of knowledge, people who were willing to learn and teach at the same time and who could convey their ideas both in the written word and in code. The spectrum of ideas the groups put forth ranged from examining a social phenomenon like Harry Potter to examining how music affects people’s mental and physical health.

To be able to fully appreciate and understand nearly every cyber liability and technology E&O risk requires people who have an uncommon breadth and depth of knowledge that extends from simple areas like grammar to complex areas like quantum mechanics. When an underwriter tries to underwrite a risk like SSA (space situational awareness), to underwrite a risk in which a company produces electronic-photonic chips or to understand memory-resident malware, that requires a degree of understanding that is clearly not being demonstrated by the majority of the current breed of underwriters. The wide-ranging creativity needed here, however, is exactly what the ThinkQuest competitions were created to foster in young people. The insurance industry needs people who can draw from a wide range of knowledge, and it also needs people who can write code with exactitude. Insurance companies must employ cyber forensic engineers who can pinpoint where a security breach happened, how an intruder gained access to additional computers and how to remedy the situation.

Being able to work individually or in a team, to backtrack to the point of intrusion and to view the world in tangible and non-tangible ways requires more than someone who can simply write one line of code after another. Currently, insurance companies depend on other companies to investigate data breaches, but this will not work in the long run. In the 20th century, numerous insurance companies owned law firms to litigate claims economically; the 21st century will require cyber liability insurers to employ cyber forensic engineers to investigate claims based on network breaches. Moreover, in the very near future, insurers will need to create an organization that tests routers, switches, servers, smartphones, robots and other technology devices to determine how secure and how capable those devices are. As was argued on the PLUS Blog in November 2015, not all technology devices are created with the same expertise, and figuring out which devices are least and most secure will greatly facilitate insurers’ ability to price policies correctly. However, finding young people who can view the computer realm in multiple dimensions, who can function in a cross-disciplinary environment and who can approach a risk from a multitude of angles can only be accomplished on a large scale through an instructional competition.

Both people with a broad and deep appreciation for multiple disciplines and cyber forensic engineers are uncommon, and insurance companies are not the only ones that need them. Cybersecurity companies, law firms, private and public educational organizations, research organizations, think tanks and governments are just a few of the sectors that need those types of people. This means that, as difficult as it already is to find thoughtful insurance people knowledgeable about the cyber world, finding them in the future is only going to be exponentially more troublesome.

When the 20-year-old who is going into her senior year at college thinks about the past and future, what will she strongly consider for a career? Will she remember the competitions the insurance industry hosted, which allowed her to cultivate friends from all over the world and to gain the needed confidence in her skills as a programmer or a writer to pursue a major in computer science or history? Will she remember the competitions that helped fund her time at college and, in doing all of that, proved that being a cyber liability underwriter is a fulfilling career opportunity? Or will that 20-year-old have nothing to remember where the insurance sector is concerned?

The Cyber Security Challenge is one competition that currently aims to increase the pool of cyber forensic engineers; however, it is not international, and it focuses only on people capable of becoming cyber forensic engineers. Professional liability insurers need both thinkers and tinkerers, and locating both on a large scale can only be accomplished through a competition like ThinkQuest. Nano-technology, advanced robotics, augmented reality and memory-resident malware are elements of a brave new world that cyber liability and tech E&O insurers are going to come face-to-face with in the short term. In three to five years, insurers are going to encounter robots where none have been before. If insurers do not create and enthusiastically support a competition like ThinkQuest, they will not be acknowledged or remembered by those in college. Consequently, insurers will find themselves without a breed of underwriters who can thrive in and understand the brave future. This must not be so!