
Why 2017 Is the Year of the Bot

In the 2013 movie “Her,” Theodore Twombly, a lonely writer, falls in love with a digital assistant designed to meet his every need.  She sorts emails, helps get a book published, provides personal advice and ultimately becomes his girlfriend. The assistant, Samantha, is A.I. software capable of learning at an astonishing pace.

Samantha will remain in the realm of science fiction for at least another decade, but less functional digital assistants, called bots, are already here. They will be among the most striking technology advances we see in our homes in 2017.

Among the bestsellers of the holiday season were Amazon.com’s Echo and Google Home. These bots talk to their users through speakers, and their built-in microphones hear from across a room. When Echo hears the name “Alexa,” its LED ring lights up in the direction of the user to acknowledge that it is listening. It answers questions, plays music, orders Amazon products and tells jokes. Google’s Home can also manage Google accounts, read and write emails and keep track of calendars and notes.

Google and Amazon have both opened up their devices to third-party developers — who in turn have added the abilities to order pizza, book tickets, turn on lights and make phone calls. We will soon see these bots connected to health and fitness devices so that they can help people devise better exercise regimens and remember to take their medicine. And they will control the dishwasher and the microwave, track what is left in the refrigerator and order an ambulance in case of emergency.

See also: What Do Bots Mean for Insurance?  

Long ago, our home appliances became electrified. Soon, they will be “cognified”: integrated into artificially intelligent systems that are accessed through voice commands. We will be able to talk to our machines in a way that seems natural. Microsoft has developed a voice-recognition technology that can transcribe speech as well as a human and translate it into multiple languages. Google has demonstrated a voice-synthesis capability that is hard to differentiate from human. Our bots will tell our ovens how we want our food to be cooked and ask us questions on their behalf.

This has become possible because of advances in artificial intelligence, or A.I. In particular, a field called deep learning allows machines to learn through neural networks — in which information is processed in layers and the connections between these layers are strengthened based on experience. In short, they learn much like a human brain. As a child learns to recognize objects such as its parents, toys and animals, neural networks learn by looking at examples and forming associations. Google’s A.I. software learned to recognize a cat, a furry blob with two eyes and whiskers, after looking at 10 million examples of cats.
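The “strengthened based on experience” idea can be made concrete with a toy sketch. The code below trains a single artificial neuron (real systems such as Google’s stack many layers of these) to separate cat-like examples from non-cats; all the features and data are hypothetical illustrations, not anything from an actual system.

```python
# Minimal sketch of learning from examples: connection weights are
# strengthened or weakened based on experience. A single artificial
# neuron is shown; deep learning stacks many layers of such units.
# All features and data below are toy/hypothetical.

def train(examples, labels, epochs=50, lr=0.1):
    """Adjust weights until the neuron's output matches the labels."""
    n = len(examples[0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            # The neuron "fires" (1) if the weighted sum crosses a threshold.
            fired = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = y - fired
            # Strengthen connections that point toward the right answer.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# Toy features: [has_whiskers, has_two_eyes, is_furry]
cats     = [[1, 1, 1], [1, 1, 1]]
non_cats = [[0, 1, 0], [0, 0, 0]]
w, b = train(cats + non_cats, [1, 1, 0, 0])
print(predict(w, b, [1, 1, 1]))  # a furry, whiskered blob -> 1 (cat)
```

The same mechanism scales up: Google’s software saw 10 million cat examples rather than four, but each example nudged connection strengths in essentially this way.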

It is all about data and example; that is how machines — and humans — learn. This is why the tech industry is rushing to get its bots into the marketplace and is pricing them at a meager $150 or less: The more devices that are in use, the more they will learn collectively, and the smarter the technology gets. Every time you search YouTube for a cute cat video and pick one to watch, Google learns what you consider to be cute. Every time you ask Alexa a question and accept the answer, it learns what your interests are and the best way of responding to your questions.

By listening to everything that happens in our homes, these bots learn how we think, live, work and play. They are gathering massive amounts of data about us. And that raises a dark side of this technology: the privacy risks and possible misuse by technology companies. Neither Amazon nor Google is forthcoming about what it is doing with all of the data it gathers and how it will protect us from hackers who exploit weaknesses in the infrastructure leading to its servers.

Of even greater concern is the dependency we are building on these technologies: We are beginning to depend on them for knowledge and advice and even emotional support.

The relationship between Theodore Twombly and Samantha doesn’t turn out very well. She outgrows him in intelligence and maturity. And she confesses to having relationships with thousands of others before she abandons Twombly for a superior, digital life form.

We surely don’t need to worry yet about our bots becoming smarter than we are. But we already have cause for worry over one-sided relationships. For years, people have been confessing to having feelings for their Roomba vacuum cleaners — which don’t create even an illusion of conversation. A 2007 study documented that some people had formed a bond with their Roombas that “manifested itself through happiness experienced with cleaning, ascriptions of human properties to it and engagement with it in promotion and protection.” And according to a recent report in New Scientist, hundreds of thousands of people say “Good morning” to Alexa every day, half a million people have professed their love for it, and more than 250,000 have proposed marriage to it.

See also: Top 10 Insurtech Trends for 2017  

I expect that we are all going to be suckers for our digital friends. Don’t you feel obliged to thank Siri on your iPhone after it answers your questions? I do, and have done so.

What Liabilities Do Robots Create?

The intersection of humanity and robots is moving from the human imagination into tangible reality. Many books and movies, like “I, Robot” and “Her,” have analyzed various potential impacts of that intersection, but the complete intersection will actually be that of humanity, robots and liability.

It is insufficient, however, to know that advanced robotics and liability will intersect. Advanced robotics is going to thrust upon insurers a world that is extremely different from the one they sought to indemnify in the 20th century. Already, drones and autonomous vehicles are forcing some parts of the insurance sector to try to determine where responsibility exists so that liability can be appropriately assigned, and those efforts will continue for at least the next decade.

The liability created by the combination of robots operating with humanity now falls on commercial, and especially professional, insurers to engineer robotic liability products to provide clients and the global economy with stability, while providing insurers a valuable stream of revenue.

There are some ground rules that must be considered before bringing robotic liability to life. First, what is the definition of a robot? For the purposes of this paper, Professor Ryan Calo’s definition will be used: A robot can sense, process and act on its environment. There is also the realization that it may currently be beyond human ability to create a unified robotic liability doctrine for insurance purposes. This is largely because of the environments in which robots will exist, as well as the legal, physical and practical ramifications of those environments. After all, drones capable of sustained flight will inherently exist in a different realm from ground-based autonomous vehicles, and the same is true for robots capable of sub-orbital and interplanetary flight. Therefore, this paper will focus on a discrete part of robotic liability: robots used in agricultural fields. Another reason for focusing on one area of robotics is to keep things simple while exploring this uncharted part of the insurance sector.

See also: Here Comes Robotic Process Automation

The farmer, the field and the harvest, the most commonplace of settings, provide an area where dimensions of robotic liability can be easily analyzed and understood. Plant husbandry draws on thousands of years of human knowledge, and it is already using aerial drones and big data analytics to maximize crop yields. Additionally, the agricultural arena has a high likelihood of being an area wherein robots cause significant shifts in multiple areas of the economy.

Within the next two or three years, a robot like this paper’s fictional AARW (autonomous agriculture robotic worker) will be created and sent to the fields to begin replacing human labor when it comes time to harvest a crop. There are multiple reasons for this belief, starting with the advance of robotic technology. The 2015 DARPA Robotics Challenge demonstrated an array of robots that will be the ancestors of a robot like AARW. In that competition, robots were required to walk on uneven terrain, accomplish tactile tasks and even drive a traditional vehicle. While the robots in that challenge were not fully autonomous, they represent a major step toward productive autonomous robots.

There are already simple machines that can perform a variety of functions, even learning a function by observing human movements. And the gap between the drawing board and reality is being quickly closed by the tremendous amount of computer hardware and software knowledge produced by both private and public institutions each month.

Moreover, there are strong labor and economic incentives for the introduction of robots into the agricultural field. Robots are able to work non-stop for 12 hours, are free from any form of health and labor laws and can have life expectancies in the five- to 15-year range. Crops are, more often than not, planted in fields with straight rows and require only the robotic ability to pick up an item, like a watermelon, take it to a bin, deposit the melon in the bin and then repeat the same steps on the next watermelon. All this requires only a modest amount of know-how on the robot’s part.

If AARW is built to industrial quality standards, then it will only require a minimal amount of maintenance over the course of each year. And if AARW is powered using solar panels, then the cost of its fuel will be included in the robot’s purchase price, which means that the minor maintenance cost along with a possible storage cost will be the only operating costs of AARW. With its ability to work non-stop and with no overhead costs for complying with human health and labor laws, AARW will be a cheaper alternative to human workers, providing a strong economic incentive for farmers to use robots in the field.
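The economics above can be sketched as a simple annual cost comparison. Every figure below is a hypothetical assumption chosen to illustrate the argument, not data from any study.

```python
# Illustrative cost comparison behind the claim that AARW is cheaper
# than human labor. All numbers are hypothetical assumptions.

YEARS_OF_SERVICE = 10            # within the five- to 15-year range cited
purchase_price   = 50_000        # solar-powered, so fuel is bundled in
annual_maintenance = 1_000       # minimal upkeep for industrial build quality
annual_storage     = 500         # possible off-season storage

annual_robot_cost = (purchase_price / YEARS_OF_SERVICE
                     + annual_maintenance + annual_storage)

# A hypothetical human crew covering the same harvest hours: wages plus
# the health- and labor-law compliance overhead the robot avoids.
annual_human_cost = 2 * 20_000 + 5_000

print(annual_robot_cost)                     # amortized robot cost per year
print(annual_robot_cost < annual_human_cost) # cheaper under these assumptions
```

The exact numbers matter less than the structure: once the purchase price is amortized, the robot’s recurring costs are small, which is the economic incentive the article describes.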

An agricultural robot will, however, create unique exposures for a farmer, and those exposures will cultivate the need for robotic liability. Arguments can be made for completed operations/product liability and technology E&O exposures with AARW in the field. However, there are multiple reasons why it would be unwise to try to relegate liability for AARW to any current product.

First and foremost, there is a strong expectation among scholars and legal experts that robots are going to do unexpected things. Imagine: At harvest time, the farmer brings AARW to the field to collect the crop of watermelons. The field happens to be near a highway on which big rigs travel, and part of the field lies next to a blind corner in the highway. As AARW successfully harvests one row after another, the farmer’s attention drifts, and she begins talking with a neighbor. Suddenly, there is a screech of tires and a loud bang as a big rig slams into AARW, which, for an unknown reason, walked into the highway.

Who should bear responsibility for the untimely demise of AARW?

If AARW were a cow, then the insurer of the big rig would have to reimburse the farmer for the loss of one of her cows. In certain respects, AARW and a cow are the same in that they can sense, process and act upon their environment. However, a cow has what is often described as a mind of its own, which is why insurance companies and the law have come to place the fault of a rogue cow on the unwitting vehicle operator instead of the aggrieved farmer.

AARW, though, is not a cow. It is a machine created to harvest produce. Does the software that controls the robot’s actions equate to the free will of an animal, like a cow? The farmer who lost the cow does not demand her money back from the rancher who sold her a reckless bovine product. Why should the creator of the robot be expected to reimburse the farmer for the loss of AARW? How does it make sense for product liability to come into play when the rancher shares no blame for the indiscreet cow? Technology companies have been extremely successful at escaping liability for the execution of poorly crafted software, so the farmer is unlikely to find any remedy in bringing a claim against the provider of the software, even if it is a separate entity from the one that assembled AARW.

Regardless of where blame is assigned, the issue would be awkward for insurers that tried to force the liability for the robot’s actions into any current insurance product. At worst, the farmer would not be made whole (technology E&O), and, at best, changing existing laws would likely only partially compensate the farmer for the loss of AARW.

See also: The Need to Educate on General Liability  

The liability waters are already murky without robotic liability. Machine learning will likely create situations that are even more unexpected than the above possibility. Imagine if AARW imitated the farmer in occasionally giving free produce samples to people passing the field. In the absence of robotic liability insurance, who should be responsible for a mistake or offending action on the robot’s part?

It would be unfortunate to place all of the blame on AARW or the farmer. The situations also call into question the quality of programming with which the robot was created. As M.C. Elish and Tim Hwang show in their paper “Praise the Machine! Punish the Human!,” historical evidence suggests it would be unwise to expect liability to be appropriately adjudicated were a farmer to sue the creator of AARW.

With an autonomous robot like AARW, it is possible to bring into consideration laws related to human juveniles. A juvenile is responsible if she decides to steal an iPad from a store, but, if she takes the family Prius for a joyride, then the parents are responsible for any damage the juvenile causes. Autonomous robots will inherently be allowed to make choices on their own, but should responsibility apply to the robot and the farmer as it does in juvenile law for a child and a parent?

From the insurer’s standpoint, it makes sense to assign responsibility to the appropriate party. If AARW entered a highway, the responsibility should fall on the farmer, who should have been close enough to stop it. Giving away produce, which could amount to petty theft, is a case where AARW incorrectly applied an action it learned, so the robot remains largely responsible.

To more fairly distribute blame, it may be worthwhile for robotic liability to contain two types of deductible. One would be the deductible paid when 51% of the blame were due to human negligence, and such a deductible would be treble the second deductible that would apply if 51% of the blame were due to an incorrect choice on the robot’s part. This would help to impress on the human the need to make responsible choices for the robot’s actions, while also recognizing that robots will sometimes make unexpected choices, choices that may have been largely unforeseeable to human thinking. Such assignment of responsibility should also have a high chance of withstanding judicial and underwriting scrutiny.
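The two-deductible scheme just described reduces to a simple rule. The sketch below encodes it; the base deductible amount is an illustrative assumption, since the text specifies only the 51% threshold and the treble multiplier.

```python
# Hypothetical sketch of the proposed two-deductible scheme: when a
# majority (51%+) of the blame is human negligence, the deductible is
# treble the one that applies when the robot's own incorrect choice is
# mostly to blame. The base amount is an assumed figure for illustration.

ROBOT_FAULT_DEDUCTIBLE = 1_000   # assumed base deductible (robot's choice)
HUMAN_FAULT_MULTIPLIER = 3       # "treble" when humans are mostly at fault

def deductible(human_blame_pct):
    """Return the deductible owed, given the human share of blame (0-100)."""
    if human_blame_pct >= 51:
        return ROBOT_FAULT_DEDUCTIBLE * HUMAN_FAULT_MULTIPLIER
    return ROBOT_FAULT_DEDUCTIBLE

print(deductible(80))  # farmer mostly negligent -> treble deductible
print(deductible(30))  # robot's unexpected choice -> base deductible
```

The asymmetry is the point of the design: the human pays more when human oversight fails, while the smaller deductible concedes that some robot choices are largely unforeseeable.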

Another argument against relegating robots to any existing form of liability concerns underwriting expertise. Currently, most insurers that offer cyber liability and technology E&O seem to possess little expertise about the intersection of risk and technology. That lack hurts insurers and their clients, who suffer time and again from inadequate coverage and unreasonable pricing. It would be advantageous to create robotic liability unencumbered by such existing deficiencies. By establishing a new insurance product and entrusting it to those who do understand the intersection of humans, liability and robots, insurers will be able to satisfy the demands of those who seek to leverage robots while also establishing a reliable stream of new revenue.

A 21st century product ought to be worthy of a 21st century insurance policy.

Another aspect of exposure that needs to be considered is in how a robot is seen socially, something that professor Calo discusses in his paper “Robotics and the Lessons of Cyberlaw.” Robots are likely to be viewed as companions, or valued possessions, or perhaps even friends.

At the turn of the last century, Sony created an experimental robotic dog named Aibo. Now a number of Aibos are enjoying a second life due to the pleasure people in retirement homes experience when interacting with them. One of the original Sony engineers created his own company just to repair dysfunctional Aibos.

While that particular robot is fairly limited in its interactive abilities, it provides an example of how willing people are to consider robots as companions instead of mechanical tools with limited value. It is more than likely that people will form social bonds with robots. And, while it is one thing to be verbally annoyed at a water pump for malfunctioning and adding extra work to an already busy day, mistreatment of a robot by its employer may be seen and felt differently by co-workers of the robot. Some people already treat a program like Apple’s Siri inappropriately. People tell Siri that it is sexy, ask what it “likes” in a romantic sense and exhibit other inappropriate behaviors toward the program, even in professional settings. While such behavior has not yet resulted in an EPL (employment practices liability) claim, such unwarranted behavior may not always be tolerated.

Consequently, the additional exposures created by a robot’s social integration into human society will more than likely result in adding elements to an insurance claim that products liability, technology E&O and other current insurance products would be ill-suited to deal with.

See also: Of Robots, Self-Driving Cars and Insurance

Advanced robotics makes some of the future murky. Will humans be able to code self-awareness into robots? Are droid armies going to create more horrific battlegrounds than those created by humans in all prior centuries? Are autonomous vehicles the key to essentially eliminating human fatalities?

However useful those kinds of questions are, the answer to each, for the foreseeable future, is unknown. What we do know for sure is that the realm of advanced robotics is starting to move from the drawing board and into professional work environments, creating unexplored liability territory. Accordingly, the most efficient way to go into the future is by creating robotic liability now because, with such a product, insurers can both generate a new stream of revenue and provide a more economically stable world.