Time for a ‘Nudge’ on Long-Term Care

For years, the go-to approach for discussing long-term care insurance (LTCi) solutions with clients has been educational. The idea was that if we could just get a prospective customer to lower his guard long enough to understand the strong statistical case for insuring against long-term care risk, the decision would become clear to him, and coverage would be purchased.

As logical as that sounds, maybe the logic itself is flawed. The reality we’re facing in the LTCi industry is that this approach is probably not the most effective way to move Americans to act on LTCi. Despite our best efforts and compelling factual arguments in favor of LTCi, adoption rates have consistently hovered around 8%, which corresponds to the percentage of the population that is predisposed to long-term planning by nature.

So why isn’t the traditional approach to planning for LTCi working for the other 92%? The answer, it turns out, might be found in recent research into “behavioral economics,” which considers economic decision making from a psychological perspective. Best-selling books such as Nudge (Richard Thaler and Cass Sunstein) and Thinking, Fast and Slow (Daniel Kahneman) have explored the ramifications of this fascinating topic.

The idea is that people don’t really act rationally, as classical economics assumes. Instead, people are motivated to act by their emotions and impulses. Moreover, the choices we make depend heavily on how options are presented to us.

See also: Can Long-Term Care Insurance Survive?

Companies and governments have recently used the findings of behavioral economics to try to “nudge” people into action. For example, more companies now auto-enroll employees in 401(k) plans and require them to opt out if they don’t want to join. The result has been a big increase in 401(k) participation. Another finding, that too many choices lead to inaction, has led to a narrowing of investment options. Similarly, “default” choices, such as target date funds, are now part of many 401(k) plans.

Here are six ways in which the findings of behavioral economics can help improve your closing rate when doing LTCi planning with clients:

  1. Keep choices as simple as possible. As an adviser, you may think your job is to give a prospective buyer multiple options for planning for care, such as spreadsheeting several insurance carriers or comparing standalone and linked products. In reality, consumers don’t want this; they want a recommendation with just a few choices. Share your due diligence, but limit the information to what you consider the best options for them to consider.
  2. Focus on the possible gain LTCi will provide instead of the possible loss. Research has shown that, just like gamblers, we all want to win, and we don’t like to think about losing. People who are considering LTCi don’t want to dwell on loss when planning for care, such as how their retirement savings may be depleted. Instead, focus on the fact that a small LTCi premium gives the policyholder the possibility of a big payoff in benefits. For example, a $2,000 annual premium could result in $300,000 to pay for high-quality care at home (see the arithmetic sketch after this list).
  3. Use stories, not statistics! Statistics are important for discovering trends and insights, but they are awful when used for discussing LTC planning. People are far too optimistic about their future and assume they will be on the winning side of a statistic. Stories and experiences that motivate prospects work much better than statistics, which can destroy empathy when talking about planning for LTC.
  4. Focus on “now” benefits, not the future. It’s incredibly difficult for people to imagine aging and needing help. Instead, focus on the “now” benefits of LTCi. These benefits are more difficult to quantify, but they can include peace of mind, underwriting while still in good health and locking in a lower premium before a birthday.
  5. Help guide heuristics (rules of thumb). For analytical advisers, it’s tempting to use tools such as cost-of-care surveys that project the cost of care 40 years in the future when designing plans. A better approach is to “follow the crowd” and recommend benefits similar to what policyholders are actually buying. You may think people want customized solutions, but most would feel more comfortable picking options similar to other buyers. Recommend they do what most people are doing.
  6. “Nudge” a choice. When people have to make a decision, such as actively signing off on the fact that they have been offered LTCi but declined, they will be more likely to buy. LTC planning is easy to delay, and people need motivation to keep them from putting off the decision forever.
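To make the gain framing in item 2 concrete, here is a minimal sketch of the arithmetic, using the article’s example figures ($2,000 annual premium, $300,000 in benefits). The 25-year payment horizon is an assumption for illustration only, not a policy quote; real premium schedules and benefit growth vary widely.

```python
# Leverage arithmetic behind the "possible gain" framing (item 2).
# The premium and benefit figures come from the article's example;
# the payment horizon is an assumption for illustration only.
annual_premium = 2_000    # illustrative annual LTCi premium ($)
benefit_pool = 300_000    # illustrative total benefits available ($)
years_paying = 25         # assumed years of premium payments

total_premiums = annual_premium * years_paying   # $50,000
leverage = benefit_pool / total_premiums         # 6.0

print(f"Total premiums paid: ${total_premiums:,}")
print(f"Benefit pool:        ${benefit_pool:,}")
print(f"Leverage:            {leverage:.1f}x the dollars paid in")
```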

See also: Long Term Care Insurance: Group plan vs Individual

Behavioral economics is a controversial topic, but we think it offers an important critique of the way we have traditionally approached LTCi planning with prospective clients. Employing some of its findings might move us beyond the 8% threshold of highly motivated long-term planners to help the remaining 92% of the population engage in meaningful consideration of their long-term care needs.

2 Heads Are Better Than 1, Right?

Everybody knows that two heads are better than one. We’ve known it since kindergarten, where we were taught that cooperation, collaboration and teamwork are not just socially desirable behaviors; they also help produce better decisions. And while we all assume that two or more people working together will solve a problem or identify an opportunity better than one person working alone, it turns out that’s only true sometimes.

Ideally, a group’s collective intelligence, its ability to aggregate and interpret information, has the potential to be greater than the sum of the intelligence of the individual group members. In the 4th century B.C., Aristotle, in Book III of his political philosophy treatise Politics, described it this way: “When there are many who contribute to the process of deliberation, each can bring his share of goodness and moral prudence…Some appreciate one part, some another, and all together appreciate all.”

But that’s not necessarily how it works in all groups, as anyone who has ever served on a committee and witnessed groupthink in action can probably testify.

Groups are as prone to irrational biases as individuals are, and the idea that a group can somehow correct for or cure individual biases is false, according to Cass Sunstein, Harvard Law School professor and author (with Reid Hastie) of Wiser: Getting Beyond Groupthink to Make Groups Smarter. Interviewed by Sarah Green on the HBR IdeaCast in December 2014, Sunstein said that individual biases can lead to mistakes but that “groups are often just as bad as individuals, and sometimes they are even worse.”

Biases can get amplified in groups. According to Sunstein, as group members talk with each other “they make themselves more confident and clear-headed in the biases with which they started.” The result? Groups can quickly get to a place where they have more confidence and conviction about a position than the individuals within the group do. Groups often lock in on that position and resist contrary information or viewpoints.

Researcher Julie A. Minson, co-author (with Jennifer S. Mueller) of The Cost of Collaboration: Why Joint Decision Making Exacerbates Rejection of Outside Information, agrees, suggesting that people who make decisions by working with others are more confident in those decisions and that the process of making a judgment collaboratively rather than individually contributes to “myopic underweighting of external viewpoints.” And even though collaboration can be an expensive, time-consuming process, it is routinely over-utilized in business decision-making simply because many managers believe that if two heads are better than one, 10 heads must be even better.

Minson disagrees: “Mathematically, you get the biggest bang for the buck going from one decision-maker to two. For each additional person, that benefit drops off in a downward-sloping curve.”
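One standard statistical way to see Minson’s point (an illustration of the principle, not her model): if group members hold independent, unbiased estimates of equal reliability, the error of their averaged judgment shrinks as 1/√n, so the step from one head to two buys the single biggest improvement, and each additional head buys less. A minimal sketch:

```python
import math

# Relative standard error of an average of n independent, equally
# reliable estimates, compared with a single decision-maker (n = 1).
# The 1/sqrt(n) curve is the "downward sloping curve" Minson
# describes: the biggest drop comes from going one -> two.
for n in range(1, 11):
    relative_error = 1 / math.sqrt(n)
    print(f"{n:2d} decision-makers -> relative error {relative_error:.2f}")
```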

Of course, group decision-making isn’t simply a business challenge; our political and judicial systems depend on groups of people, such as elected officials and jurors, to deliberate, collaborate and make important decisions. Jack Soll and Richard Larrick, in their Scientific American article You Know More than You Think, observed that while crowds are not always wise, they are more likely to be wise when two principles are followed: “The first principle is that groups should be composed of people with knowledge relevant to a topic. The second principle is that the group needs to hold diverse perspectives and bring different knowledge to bear on a topic.”

Cass Sunstein takes it further, saying that for a group to operate effectively as a decision-making body (a jury, for instance), it must consist of:

  • A diverse pool of people
  • Who have different life experiences
  • Who are willing to listen to the evidence
  • Who are willing to listen to each other
  • Who act independently
  • Who refuse to be silenced

Does that sound like a typical decision-making group to you? When I heard that description, I immediately thought of Juror 8 (Henry Fonda) in “12 Angry Men”: a principled and courageous character who single-handedly guided his fractious jury to a just verdict. It is much harder for me to imagine our elected officials, or jury pool members, or even the unfortunate folks dragooned into serving on a committee or task force at work, sharing those same characteristics.

The good news is that two heads are definitely better than one when those heads are equally capable and communicate freely, at least according to Dr. Bahador Bahrami of the Institute of Cognitive Neuroscience at University College London, author of “Optimally Interacting Minds.” He observed: “To come to an optimal joint decision, individuals must share information with each other and, importantly, weigh that information by its reliability.”
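Bahrami’s prescription, that partners share their estimates and weigh them by reliability, matches the standard inverse-variance rule from statistics. Here is a minimal sketch under that assumption (an illustration of the principle, not Bahrami’s exact model):

```python
# Reliability-weighted combination of two individual estimates.
# Each estimate is weighted by 1/variance, so the more reliable
# voice counts for more, and the joint estimate ends up more
# precise than either individual one.
def combine(estimate_a, var_a, estimate_b, var_b):
    w_a, w_b = 1 / var_a, 1 / var_b
    joint = (w_a * estimate_a + w_b * estimate_b) / (w_a + w_b)
    joint_var = 1 / (w_a + w_b)   # smaller than both var_a and var_b
    return joint, joint_var

# A confident estimator (variance 1.0) paired with a noisier one
# (variance 4.0): the joint answer sits closer to the reliable voice.
print(combine(10.0, 1.0, 20.0, 4.0))   # -> (12.0, 0.8)
```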

Think of your last group decision. Did the group consist of capable, knowledgeable, eager listeners with diverse viewpoints and life experiences, and a shared commitment to evidence-based decision-making and open communication? Probably not, but sub-optimal group behavior and decisions can occur even in the best of groups. In their Harvard Business Review article “Making Dumb Groups Smarter,” Sunstein and Hastie suggest that botched informational signals and reputational pressures are to blame: “Groups err for two main reasons. The first involves informational signals. Naturally enough, people learn from one another; the problem is that groups often go wrong when some members receive incorrect signals from other members. The second involves reputational pressures, which lead people to silence themselves or change their views in order to avoid some penalty, often merely the disapproval of others. But if those others have special authority or wield power, their disapproval can produce serious personal consequences.”

On the topic of “special authority” interfering with optimal decision-making, I recently heard a clever term used to describe a form of influence that is often at work in a decision-making group. The HiPPO (“Highest Paid Person’s Opinion”) effect refers to the unfortunate tendency for lower-paid employees to defer to higher-paid employees in group decision-making situations. Not too surprising, then, that the first item on Sunstein and Hastie’s list of things to do to make groups wiser is “Silence the Leader.”

So exactly how do botched informational signals and reputational pressures lead groups into making poor decisions? Sunstein and Hastie again:

  • Groups do not merely fail to correct the errors of their members; they amplify them.
  • Groups fall victim to cascade effects, as group members follow the statements and actions of those who spoke or acted first.
  • They become polarized, taking up positions more extreme than those they held before deliberations.
  • They focus on what everybody knows already, and thus don’t take into account critical information that only one or a few people have.

Next time you are on the verge of convening a roomful of people to make a decision, stop and think about what it takes to position any group to make effective decisions. You might be better off taking Julie Minson’s advice and choosing just one other person to partner with you on the decision instead. Seldom Seen Smith, the river guide character in The Monkey Wrench Gang by Edward Abbey, was obviously a skeptic when it came to group decision-making, but he may have been on to something when he declared:

“One man alone can be pretty dumb sometimes, but for real bona fide stupidity, there ain’t nothin’ can beat teamwork.”

How Customers Really Think About Insurance

Since presenting on the topic and writing an article for the Chartered Insurance Institute’s Journal, I’ve continued to hear a demand for a better understanding of behavioral economics (BE). It appears the majority of insurers have delegated the challenge of understanding behavioral economics to their risk and pricing teams, and few are engaging actively with their marketing and customer insight teams.

I think this is a missed opportunity, not just for better compliance with Financial Conduct Authority (FCA) expectations, but also for the commercial gains to be made from better-designed communications.

That said, I suspect the majority of you have at least heard of BE. In recent years, the success of popular books on the subject has ensured plenty of media coverage and social media debate on its implications and appropriateness. Easy-to-read introductions to the subject include “Nudge” by Richard Thaler and Cass Sunstein. More comprehensive and challenging is the classic text “Thinking, Fast and Slow” by Daniel Kahneman. Both are well worth reading, and there are now many others to choose from.

What makes this subject of greater relevance to the financial services industry is the influence of behavioral economics on the thinking of both the UK government and the FCA. Government policy is being influenced by the work of its “nudge unit.” Meanwhile, the FCA has said that it expects companies to consider how their customers actually make decisions.

So what exactly does behavioral economics teach us with regard to how people make decisions? There are numerous experts and many slightly different approaches, but I believe the categorization proposed by the FCA is a good place to start. In its first occasional paper on the subject, the FCA proposed the following list of 10 behavioral biases:

  1. Present bias. This is an overvaluing of the present compared with the future. It might manifest in choices that look like immediate gratification or in ones that look like procrastination. An insurance example: customers considering only the premium cost now rather than making a full comparison of the cover provided for the future.
  2. Reference dependence and loss aversion. Loss aversion can be seen in tests where people will consistently seek to avoid a certain loss, even if they have to take a gamble or pay more to do so. Reference dependence is the assessment of gains or losses relative to a subjective reference point. Retailers use this a lot. I’m sure you’ve seen supermarket pricing manipulated to make a relatively expensive choice look more mid-market by comparing it with higher “dummy prices.” For an insurance example: Customers might make different decisions if just shown the costs of monthly or annual premiums on a renewal letter, as opposed to also seeing a comparison with last year’s premium.
  3. Regret and other emotions. Here we are dealing with irrational actions taken to avoid experiencing negative emotions in the future. This might involve procrastinating on important decisions, like being checked out by a doctor, or a willingness to pay for products that avoid decision making (like premium products promising to cover everything you need). A worrying example for insurers is consumers’ unwillingness to engage with the need for life insurance, because of their discomfort with imagining the death of a loved one.
  4. Overconfidence. That is, overconfidence about the likelihood of future events or our own abilities, or rationalizing past events (with the benefit of hindsight). For instance, almost all drivers believe they are above average. Another example is the planning fallacy, whereby most people consistently underestimate how long it will take them to get something done. Among insurance customers, we can see this bias at work in consistent underestimating of the cover needed or in assuming an ability to self-insure or cope financially without protection.
  5. Over-extrapolation. Here we are dealing with making predictions on the basis of too few data points. A classic example is the behavior of most investors: most people underestimate the level of uncertainty and buy or sell shares on the basis of insufficient data for a robust forecast. One could say the same behavior is exhibited in consumers’ use of insurance comparison sites. Undue importance can be given simply to the cheapest price or known brands, to shortcut decision-making time, rather than to a rational comparison of cover, service, recommendations, etc.
  6. Projection bias. This is the expectation that your current feelings, attitudes and preferences will continue into the future, so you underestimate the potential for change. A classic example is the effect of the weather on sales of houses and cars. The feel of a house, or the looks of a car, on a sunny day is projected into the future, and the purchase is made without sufficient investigation, leading to higher sales on sunny days. An insurance example is the low engagement of the working population with critical illness cover or health insurance, because of a projection of current good health into the future.
  7. Mental accounting and narrow framing. This is the behavior whereby people treat money or assets differently according to the purpose assigned to them, and consider such decisions in isolation rather than looking at the overall impact. For instance, people leave funds in savings accounts earning lower interest rates while carrying debts that cost more. An insurance example is perhaps the estimate of the sum insured, which is driven more by the impact on the regular premium and the budget allocated than by purchases made and the value of possessions.
  8. Framing, salience and limited attention. This means reacting differently to essentially the same choice because it is presented differently, partly because of limited attention to all but the most salient points. For example, shoppers are more likely to buy meat labeled 75% lean than meat labeled 25% fat. For an insurance example, consider the different responses to financial statements when the same information is simply presented in different ways. Simpler presentation that makes the most important information salient can change engagement and action.
  9. Decision-making “rules of thumb,” or heuristics. This is the tendency to simplify complex decisions by answering more familiar, status quo or less ambitious questions instead. An example is interviewers choosing candidates most like known colleagues or being swayed by stereotypes. In insurance, one sees customers simplifying many decisions this way; for instance, “Is my pension fund performing well, and do I need to increase my contributions to achieve my goal?” becomes “Is anything wrong, and do they say I have to do anything now?”
  10. Persuasion and social influence. This includes being persuaded because a seller is likable or comes across as a good person. There are examples of people being unduly swayed by apparent social norms, such as increases in recycling when local government shares the percentage of neighbors who recycle. For insurance, consumers’ assumption that they “should” use comparison sites to shop around, because of an impression that everyone does so now, has been shaped by consistent advertising on TV and other media. It is interesting to see this reflected in customers who make a buying decision first, then find some evidence on a comparison site to justify the choice afterward.

There is much more I could share on BE, but this post is long enough for you to judge your interest in the topic. Do comment if you’d like to see more, especially on how to apply the theory in practice.