An Ethics Question on Driverless Cars

Driverless cars will have to choose between evils when an accident is imminent, posing thorny ethics issues for makers and for insurers.


Self-driving cars will transform personal travel and, in doing so, will pose some interesting questions for insurers. One area insurers seem not to have addressed so far is the ethics of self-driving cars, and one ethical issue in particular could have a significant influence on the liability exposures these cars present.

Picture yourself relaxing in the back of a self-driving car. You've just dropped off your son, who has run off along the pavement ahead of you. Your car pulls out and accelerates, but suddenly six cyclists swerve into its path. A collision is imminent, and your self-driving car's computer has to make a split-second decision. Should the car swerve out of the cyclists' way, saving their lives, but in doing so mount the pavement and kill your son? Or should it carry on and plow into the cyclists, saving your son's life?

Remember that the decision isn't yours: It's to be taken by your self-driving car's computer. Should the computer be programmed to reduce the overall number of casualties (and so avoid the cyclists but kill your son), or should it be programmed to put your interests first (and so collide with the cyclists)?

Classic ethical scenario

Some of you will recognize this as one of the classic scenarios used to stir debate in philosophy and ethics. It illustrates two ethical positions: utilitarianism and deontology. The utilitarian position says swerve: six lives are saved at the cost of one. The deontological position, framed here as a duty the car owes to you, its owner, says carry on: your interests come first, whatever the arithmetic.
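To make the contrast concrete, here is a deliberately minimal sketch in Python. Everything in it is invented for illustration: the names (Outcome, utilitarian_choice, owner_first_choice) are hypothetical, and reducing each ethical position to a one-line scoring rule is itself a simplifying choice, not a reflection of how any real vehicle software works.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible maneuver and its predicted consequences (illustrative only)."""
    action: str              # e.g. "swerve" or "carry_on"
    casualties: int          # total predicted deaths
    harms_your_family: bool  # does this choice cost your son his life?

def utilitarian_choice(outcomes):
    """Utilitarian rule: pick the action with the fewest predicted casualties."""
    return min(outcomes, key=lambda o: o.casualties).action

def owner_first_choice(outcomes):
    """Owner-first rule: prefer any outcome that spares your own family."""
    safe = [o for o in outcomes if not o.harms_your_family]
    return (safe or outcomes)[0].action

scenario = [
    Outcome("swerve", casualties=1, harms_your_family=True),     # your son dies
    Outcome("carry_on", casualties=6, harms_your_family=False),  # six cyclists die
]

print(utilitarian_choice(scenario))   # -> swerve
print(owner_first_choice(scenario))   # -> carry_on
```

The same scenario, fed to two different rules, produces opposite decisions; the ethics lies in which rule gets shipped.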

The purely financial implications for insurers are clear: A self-driving car programmed according to utilitarian ethics will carry a lower liability exposure than one programmed according to deontological ethics. Will we see insurers turning to philosophers for help in deciding which car models fall into which rating categories?

Programmed by humans

The key point here, though, is not the employment prospects of philosophers but the recognition that the algorithms underpinning the decisions made by self-driving cars will be programmed by human beings. Just like you and me, those programmers will be full of opinions and preconceptions, which will in turn influence the preferences they code into the decisions your self-driving car takes.

And as the big data supporting those decisions builds, so will the complexity those algorithms have to handle. For example, if the six cyclists were wearing health-tracking devices that told your self-driving car's computer they were all octogenarians, should it still swerve into the path of your only child?
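As a purely hypothetical illustration of that extra complexity, weighting each casualty by expected remaining life-years (all figures below are invented) can flip the answer the "same" utilitarian rule gives:

```python
# Hypothetical extension of the utilitarian rule: minimize expected
# life-years lost rather than raw casualty counts. All figures invented.
swerve_cost   = 1 * 70  # your young son: roughly 70 years of life lost
carry_on_cost = 6 * 5   # six octogenarian cyclists: roughly 5 years each

action = "swerve" if swerve_cost < carry_on_cost else "carry_on"
print(action)  # -> carry_on: the weighted rule reverses the unweighted one
```

A rule that once said "swerve" now says "carry on," not because the ethics changed but because someone chose a different weighting.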

Embedding choices

The permutations are endless, but one point is fixed: Insurers using big data for underwriting and claims decisions need to recognize that choices are going to be embedded in those algorithms, and those choices often have an ethical dimension that should reflect the values of the insurer and the requirements of the regulatory framework it operates within. Simply saying, as some insurers now do, that "it was the data that made the decision" will not hold water.



Duncan Minty

Duncan Minty is an independent ethics consultant with a particular interest in the insurance sector. Minty is a chartered insurance practitioner and the author of ethics courses and guidance papers for the Chartered Insurance Institute.
