December 13, 2016
What Liabilities Do Robots Create?
by Jesse Lyon
Advanced robotics is going to thrust upon insurers a world that is extremely different from the one they sought to indemnify in the 20th century.
The intersection of humanity and robots is moving from the human imagination into tangible reality. Books and films such as I, Robot and Her have explored potential impacts of that intersection, but the complete intersection will actually be that of humanity, robots and liability.
It is insufficient, however, to know that advanced robotics and liability will intersect. Advanced robotics is going to thrust upon insurers a world that is extremely different from the one they sought to indemnify in the 20th century. Already, drones and autonomous vehicles are forcing some parts of the insurance sector to try to determine where responsibility exists so that liability can be appropriately assigned, and those efforts will continue for at least the next decade.
The liability created by robots operating alongside humans now falls on commercial, and especially professional, insurers, who must engineer robotic liability products that provide clients and the global economy with stability while generating a valuable stream of revenue.
There are some ground rules to consider before bringing robotic liability to life. First, what is a robot? For the purposes of this paper, Professor Ryan Calo's definition will be used: a robot is a machine that can sense, process and act on its environment. There is also the realization that a unified robotic liability doctrine for insurance purposes may currently be beyond reach, largely because of the varied environments in which robots will exist and the legal, physical and practical ramifications of those environments. After all, drones capable of sustained flight will inherently exist in a different realm from ground-based autonomous vehicles, and the same is true for robots capable of sub-orbital and interplanetary flight. Therefore, this paper focuses on a discrete part of robotic liability: robots used in agricultural fields. Focusing on one area of robotics also keeps things simple while exploring this uncharted part of the insurance sector.
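Calo's sense-process-act definition can be sketched as a simple control loop. The class and function names below are invented for illustration and are not drawn from Calo's paper; any machine matching this loop would count as a robot under the definition:

```python
class Robot:
    """Minimal sketch of Calo's sense-process-act definition of a robot."""

    def __init__(self, sensor, planner, actuator):
        self.sensor = sensor      # sense: read the environment
        self.planner = planner    # process: decide what to do
        self.actuator = actuator  # act: change the environment

    def step(self):
        observation = self.sensor()         # sense
        action = self.planner(observation)  # process
        self.actuator(action)               # act
```

On this view, a thermostat-like device that only senses and acts without any processing step would fall outside the definition, while AARW squarely meets it.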
See also: Here Comes Robotic Process Automation
The farmer, the field and the harvest, the most commonplace of settings, provide an area where dimensions of robotic liability can be easily analyzed and understood. Plant husbandry draws on thousands of years of human knowledge, and it is already using aerial drones and big data analytics to maximize crop yields. Additionally, the agricultural arena has a high likelihood of being an area wherein robots cause significant shifts in multiple areas of the economy.
Within the next two or three years, a robot like this paper's fictional AARW (autonomous agricultural robotic worker) will be created and sent to the fields to begin replacing human labor at harvest time. There are multiple reasons for this belief, starting with the advance of robotic technology. The 2015 DARPA Robotics Challenge demonstrated an array of robots that will be the ancestors of a robot like AARW. In that competition, robots were required to walk on uneven terrain, accomplish tactile tasks and even drive a conventional vehicle. While the robots in that challenge were only partially autonomous, they represent an undeniable step toward productive autonomous robots.
There are already simple machines that can perform a variety of functions, even learning a function by observing human movements, and the gap between the drawing board and reality is being quickly eroded with the tremendous amount of computer hardware and software knowledge that is produced by both private and public institutions each month.
Moreover, there are strong labor and economic incentives for introducing robots into the agricultural field. Robots can work non-stop for 12 hours, are free from health and labor laws and can have life expectancies in the five- to 15-year range. Crops are, more often than not, planted in fields with straight rows and require only the robotic ability to pick up an item, like a watermelon, carry it to a bin, deposit it and repeat the same steps with the next watermelon. All this requires only a modest amount of know-how on the robot's part.
If AARW is built to industrial quality standards, then it will only require a minimal amount of maintenance over the course of each year. And if AARW is powered using solar panels, then the cost of its fuel will be included in the robot’s purchase price, which means that the minor maintenance cost along with a possible storage cost will be the only operating costs of AARW. With its ability to work non-stop and with no overhead costs for complying with human health and labor laws, AARW will be a cheaper alternative to human workers, providing a strong economic incentive for farmers to use robots in the field.
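The cost argument can be made concrete with a back-of-the-envelope comparison. All figures below are assumptions for illustration; the article itself provides only the five- to 15-year lifespan estimate:

```python
# Hypothetical figures (not from the article) comparing AARW's annualized
# cost of ownership with seasonal human harvest labor.
AARW_PURCHASE_PRICE = 50_000      # assumed up-front cost, solar power included
AARW_ANNUAL_MAINTENANCE = 1_500   # assumed minor yearly upkeep
AARW_ANNUAL_STORAGE = 500         # assumed off-season storage
AARW_LIFESPAN_YEARS = 10          # mid-range of the 5- to 15-year estimate

HUMAN_HOURLY_COST = 12.0          # assumed wage including payroll overhead
HARVEST_HOURS_PER_YEAR = 2_000    # assumed labor replaced by one AARW

# Straight-line depreciation plus running costs; no fuel, since solar
# power is bundled into the purchase price.
aarw_annual_cost = (AARW_PURCHASE_PRICE / AARW_LIFESPAN_YEARS
                    + AARW_ANNUAL_MAINTENANCE + AARW_ANNUAL_STORAGE)
human_annual_cost = HUMAN_HOURLY_COST * HARVEST_HOURS_PER_YEAR

print(f"AARW:  ${aarw_annual_cost:,.0f}/year")
print(f"Human: ${human_annual_cost:,.0f}/year")
```

Under these assumed numbers, AARW's annualized cost is a fraction of the equivalent human labor cost, which is the economic incentive the paragraph above describes.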
An agricultural robot will, however, create unique exposures for a farmer, and those exposures will cultivate the need for robotic liability. Arguments can be made for completed operations/product liability and technology E&O exposures with AARW in the field. However, there are multiple reasons why it would be unwise to try to relegate liability for AARW to any current product.
First and foremost, there is a strong expectation among scholars and legal experts that robots are going to do unexpected things. Imagine: At harvest time, the farmer brings AARW to the field to collect the crop of watermelons. The field happens to be near a highway on which big rigs travel, and part of the field lies next to a blind corner in the highway. As AARW successfully harvests one row after another, the farmer’s attention drifts, and she begins talking with a neighbor. Suddenly, there is a screech of tires and a loud bang as a big rig slams into AARW, which, for an unknown reason, walked into the highway.
Who should bear responsibility for the untimely demise of AARW?
If AARW were a cow, then the insurer of the big rig would have to reimburse the farmer for the loss of one of her cows. In certain respects, AARW and a cow are the same in that they can sense, process and act upon their environment. However, a cow has what is often described as a mind of its own, which is why insurance companies and the law have come to place the fault of a rogue cow on the unwitting vehicle operator instead of the aggrieved farmer.
AARW, though, is not a cow. It is a machine created to harvest produce. Does the software that controls the robot’s actions equate to the free will of an animal, like a cow? The farmer who lost the cow does not demand her money back from the rancher who sold her a reckless bovine product. Why should the creator of the robot be expected to reimburse the farmer for the loss of AARW? How does it make sense for product liability to come into play when the rancher shares no blame for the indiscreet cow? Technology companies have been extremely successful at escaping liability for the execution of poorly crafted software, so the farmer is unlikely to find any remedy in bringing a claim against the provider of the software, even if it is a separate entity from the one that assembled AARW.
Regardless of where blame is assigned, the issue would be awkward for insurers that tried to force liability for the robot's actions into any current insurance product. At worst, the farmer would not be made whole (technology E&O), and, at best, changing existing laws would likely only partially compensate the farmer for the loss of AARW.
See also: The Need to Educate on General Liability
The liability waters are already murky without robotic liability. Machine learning will likely create situations that are even more unexpected than the above possibility. Imagine if AARW imitated the farmer in occasionally giving free produce samples to people passing the field. In the absence of robotic liability insurance, who should be responsible for a mistake or offending action on the robot’s part?
It would be unfortunate to place all of the blame on AARW or the farmer. The situations also call into question the quality of the programming with which the robot was created. In their paper "Praise the Machine! Punish the Human!," M.C. Elish and Tim Hwang present historical evidence that makes it unwise to expect liability to be appropriately adjudicated were a farmer to sue the creator of AARW.
With an autonomous robot like AARW, it is possible to bring into consideration laws related to human juveniles. A juvenile is responsible if she decides to steal an iPad from a store, but, if she takes the family Prius for a joyride, then the parents are responsible for any damage the juvenile causes. Autonomous robots will inherently be allowed to make choices on their own, but should responsibility apply to the robot and the farmer as it does in juvenile law for a child and a parent?
From the insurer's standpoint, it makes sense to assign responsibility to the appropriate party. If AARW entered a highway, responsibility should fall on the farmer, who should have been close enough to stop it. Giving away produce, which could amount to petty theft, is a different matter: because AARW incorrectly applied an action it learned, the robot remains largely responsible.
To more fairly distribute blame, it may be worthwhile for robotic liability to contain two deductibles. One would apply when 51% or more of the blame were due to human negligence, and it would be treble the second deductible, which would apply if 51% or more of the blame were due to an incorrect choice on the robot's part. This would impress on the human the need to make responsible choices about the robot's actions, while recognizing that robots will sometimes make unexpected choices, choices that may have been largely unforeseeable to human thinking. Such an assignment of responsibility should also have a good chance of withstanding judicial and underwriting scrutiny.
Another disservice to relegating robots to any existing form of liability is in the form of underwriting expertise. Currently, most insurers that offer cyber liability and technology E&O seem to possess little expertise about the intersection of risk and technology. That lack hurts insurers and their clients, who suffer time and again from inadequate coverage and unreasonable pricing. It would be advantageous to create robotic liability that would be unencumbered by such existing deficiencies. By establishing a new insurance product and entrusting it to those who do understand the intersection of humans, liability and robots, insurers will be able to satisfy the demands of those who seek to leverage robots while also establishing a reliable stream of new revenue.
A 21st century product ought to be worthy of a 21st century insurance policy.
Another aspect of exposure that needs to be considered is how a robot is seen socially, something that Professor Calo discusses in his paper "Robotics and the Lessons of Cyberlaw." Robots are likely to be viewed as companions, valued possessions or perhaps even friends.
Around the turn of this century, Sony created a robotic dog named Aibo. Now a number of Aibos are enjoying a second life because of the pleasure people in retirement homes experience when interacting with them. One of the original Sony engineers even created his own company just to repair dysfunctional Aibos.
While that particular robot is fairly limited in its interactive abilities, it provides an example of how willing people are to consider robots companions instead of mechanical tools of limited value. It is more than likely that people will form social bonds with robots. And, while it is one thing to be verbally annoyed at a water pump for malfunctioning and adding extra work to an already busy day, mistreatment of a robot by its employer may be seen and felt differently by the robot's co-workers. Some people already treat a program like Apple's Siri inappropriately: they tell Siri that it is sexy, ask what it "likes" in a romantic sense and exhibit other behaviors toward the program, even in professional settings, that are inappropriate. While such behavior has not yet resulted in an EPL (employment practices liability) claim, such unwarranted behavior may not be tolerated indefinitely.
Consequently, the additional exposures created by a robot’s social integration into human society will more than likely result in adding elements to an insurance claim that products liability, technology E&O and other current insurance products would be ill-suited to deal with.
Advanced robotics makes some of the future murky. Will humans be able to code self-awareness into robots? Are droid armies going to create more horrific battlegrounds than those created by humans in all prior centuries? Are autonomous vehicles the key to essentially eliminating traffic fatalities?
However useful those kinds of questions are, the answer to each, for the foreseeable future, is unknown. What we do know for sure is that advanced robotics is starting to move from the drawing board into professional work environments, creating unexplored liability territory. Accordingly, the most efficient way to prepare for that future is to create robotic liability now because, with such a product, insurers can generate a new stream of revenue while providing a more economically stable world.