
5 Ways to Flub a Big Decision

Research into 2,500 major failures finds that many big decisions are doomed even before they come off the drawing board. Why?


Business is a contact sport. Some companies win, while others lose. That won’t change. There is no way to guarantee success. Make the best decisions you can, then fight the battle in the marketplace.

Yet research into more than 2,500 large corporate failures that Paul Carroll and I did found that many big decisions are doomed as they come off the drawing board—before first contact with the competition.

Why?

The short answer is that humans are far from rational in their planning and decision-making. Psychological and anthropological studies going back decades, including those of Solomon Asch, Stanley Milgram, Irving Janis, Donald Brown and, more recently, Dan Ariely, consistently demonstrate that even the smartest among us face huge impediments when making complicated decisions, such as those involved in setting strategy. In other words, humans are hard-wired to come up with bad decisions. Formulating good ones is very difficult because of five natural tendencies:

1. Fallacious assumptions: If “point of view is worth 80 IQ points,” as Alan Kay says, people often start out in a deep hole. One problem is the anchoring bias, where we subconsciously tend to work from whatever spreadsheet, forecast or other formulation we’re presented. We tend to tinker rather than question whether the assumptions are right or the ideas are even worth considering. Even when we know a situation requires more sophisticated analysis, it’s hard for us to dislodge the anchors.

See Also: Better Way to Think About Leadership

Another strike against expansive thinking is what psychologists call the survivorship bias: We remember what happened; we don’t remember what didn’t happen. We are encouraged to take risks in business, because we read about those who made “bet the company” decisions and reaped fortunes—and don’t read about those who never quite made the big time because they made “bet the company” decisions and lost.

2. Premature closure: People home in on an answer prematurely, long before they have evaluated all the information. We get a first impression of an idea in much the same way we get a first impression of a person. Even when people are trained to withhold judgment, they find themselves evaluating information as they go along, forming a tentative conclusion early in the process. Premature conclusions, like first impressions, are hard to reverse.

A study of analysts in the intelligence community, for instance, found that, despite their extensive training, analysts tended to come to a conclusion very quickly and then “fit the facts” to that conclusion. A study of clinical psychologists found that they formed diagnoses relatively rapidly and that additional information didn’t improve those diagnoses.

3. Confirmation bias: Once people start moving toward an answer, they look to confirm that their answer is right, rather than hold open the possibility that they’re wrong. Although science is supposed to be the most rational of endeavors, it constantly demonstrates confirmation bias. Ian Mitroff’s The Subjective Side of Science shows at great length how scientists who had formulated theories about the origins of the Moon refused to capitulate when the moon rocks brought back by Apollo 11 disproved their theories; the scientists merely tinkered with their theories to try to skirt the new evidence.

Max Planck, the eminent physicist, said scientists never do give up their biases, even when those biases are discredited. The scientists just slowly die off, making room for younger scientists who didn’t grow up with the errant biases. Planck could just as easily have been describing most business people.

4. Groupthink: People conform to the wishes of the group, especially if there is a strong person in the leadership role, rather than ask tough questions. Our psyches lead us to go along with our peers and to conform, in particular, to the wishes of authority figures. Numerous psychological experiments show that humans will go along with the group to surprising degrees.

From a business standpoint, ample research, supported by numerous examples, suggests that even senior executives, as bright and decisive as they typically are, may value their standing with their peers and bosses so highly that they’ll bend to the group’s wishes—especially when the subject is complicated and the answers aren’t clear, as is always the case in strategy setting.

5. Failure to learn from past mistakes: People tend to explain away their mistakes rather than to acknowledge their errors, making it impossible to learn from them. Experts are actually more likely to suffer from overconfidence than the rest of the world. After all, they’re experts. Studies have found that people across all cultures tend to think highly of themselves even if they shouldn’t. They also blame problems on bad luck rather than take responsibility and learn from failures. Our rivals may succeed through good luck, but not us. We earned our way to the top.

While it’s been widely found that some 70% of corporate takeovers hurt the stock-market value of the acquiring company, studies find that roughly three-quarters of executives report that takeovers they were involved in were successes.

The really aware decision makers (the sort who read articles like this one) realize the limitations they face. So they redouble their efforts, insisting on greater vigilance and deeper analysis. The problem is that this isn’t enough.

As the long history of corporate failures shows, vigilant and analytical executives can still come up with demonstrably bad strategies. The solution is not just to be more careful. Accept that the tendency toward decision-making errors is deeply ingrained, and adopt devil’s advocates and other explicit mechanisms to counter those tendencies.

Could Location Data Be the Golden Thread?

In a world of confusing, unstructured data, do location coordinates provide an anchor for all the new information becoming available?

In insurance, location is everything. It helps insurers understand where the risks are, whether there has been accidental (or deliberate) accumulation of risk and where their customers are. Location helps insurers optimize their distribution strategy, their claims services deployment, their supply chain and even how they market and advertise their services.

The technologies of location intelligence and weather prediction also naturally converge to help anticipate the impact of hail and storms, allowing insurers to proactively advise their policyholders to act (although only half of policyholders who are warned of an impending event actually take action). Bringing weather and location information together creates an environment where insurers change from being reactive to being proactive. New touch points are also created with policyholders (as opposed to a single annual request for premium), with the potential both to add value to the insurance proposition and to improve loyalty. Some might reasonably argue that weather forecasts are already available from the news. Perhaps one task for insurers going forward is to create a more effective interlock between weather forecasting, policyholder behavior and premium reduction?

Increasingly, location is being seen as a subset of big data rather than a stand-alone technology. In a world of data where 80% is unstructured and uncertain, do the coordinates of location provide some sort of anchor for all the new information becoming available? After all, what could be more certain than where something or someone is physically located? Imagine if location data became the golden thread that tied all insurance information together.

For many, location information still equates to mapping and "flat" visualizations. It is fundamentally descriptive in nature, albeit providing effective illustrations of potentially complex issues.
As location intelligence increasingly aligns to predictive and cognitive analytics, perhaps the "power of place" may start to assume new meaning? Location data is becoming increasingly pervasive in the insurance industry. The connected car, the connected home and the connected person all have a location component. Perhaps the future for insurers isn’t just around being "data-driven" but "location-data-driven"?

Tony Boobier


Tony Boobier is a former worldwide insurance executive at IBM focusing on analytics and is now operating as an independent writer and consultant. He entered the insurance industry 30 years ago. After working for carriers and intermediaries in customer-facing operational roles, he crossed over to the world of technology in 2006.

How to Turbocharge a Marketing Budget

Insurers, especially in auto, should emulate Amazon and use their marketing to generate a new revenue stream ... from non-buyers.

Competition in the auto insurance industry is at an all-time high, with carriers engaged in an aggressive battle to acquire new customers and grow market share. Massive investments are being made in marketing and advertising to expand brand awareness and drive customer acquisition. This marketing arms race has led to annual ad spending growth of 15% to 20% per year, with total auto insurance advertising spending, by some accounts, eclipsing $8 billion per year.

These marketing investments are having a measurable impact at the top of the sales funnel. The total number of active shoppers looking for auto insurance quotes every year is increasing; nearly 40% of all insurance policyholders are actively shopping to find a better deal. Likewise, brand-building efforts are having the desired effect on consumer behavior. Consumers are now more likely to begin their research by searching for brand names or visiting a brand advertiser’s site. Many suspect that this "brand shift" in search behavior from generic keywords (“auto insurance quotes”) to brand-specific keywords (“GEICO,” “State Farm”) was, in part, behind Google’s recent decision to eliminate ad placements from the right-hand side of the search results page. The move increases the prominence of the few leading advertisers who can pay for premium positioning on generic terms, enabling Google to generate more revenue from the declining share of non-branded insurance searches.

See Also: From Marketing Myths to Truths

The corresponding impact of these marketing investments at the bottom of the funnel is not as clear. In fact, conversion rates across the industry are actually on the decline. J.D. Power’s 2015 U.S. Insurance Shopping Study summarized the market dynamics and customer acquisition challenges facing marketers:
  • 39% of current auto insurance policyholders are shopping, up from 32% in 2013. With close to 200 million policyholding households in the U.S., this translates into almost 80 million shoppers.
  • But only 29% of shoppers switch carriers, down from 37%.
  • And the average industry close rate for auto insurance carriers has declined to 13% from 18% in 2013.
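The figures above multiply out straightforwardly. As a quick sketch (the household count and rates come from the study cited; the code is merely the arithmetic):

```python
# Back-of-envelope funnel arithmetic from the J.D. Power figures above.
# The household count and shopping rate are from the article; the function
# is simple multiplication, shown only to make the magnitudes concrete.

def active_shoppers(households: float, shop_rate: float) -> float:
    """Estimated number of policyholding households shopping in a year."""
    return households * shop_rate

shoppers = active_shoppers(households=200e6, shop_rate=0.39)
print(f"{shoppers / 1e6:.0f}M active shoppers")  # prints "78M active shoppers"
```

That 78 million is the "almost 80 million shoppers" in the bullet above; applying the 13% close rate to any one carrier's quotes is what makes the needle look so small relative to the haystack.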
So, on the one hand, 80 million active shoppers represent a large and attractive market of consumers who are engaging auto insurance brands. On the other hand, the total number of shoppers who switch to a new carrier is declining, and the average overall close rate across the industry is just 13%. In essence, the size of the haystack is growing, while the needle is seemingly getting smaller.

These are daunting statistics for insurance marketers and reinforce the importance of marketing efficiency. Additionally, the Google AdWords changes put pressure on all but the largest brands to find creative ways to optimize budget and stay in front of shoppers. For larger brands, the increased competition for a smaller number of above-the-fold placements will likely inflate cost-per-click (CPC) pricing. In this challenging landscape, marketers need new strategies to optimize the entire sales funnel, get the most out of their marketing dollars and maintain competitive relevance.

Several innovative carriers have begun to embrace a creative solution to this challenge by recognizing that active shoppers on their sites are a highly valuable asset that can be monetized. These carriers monetize shoppers by presenting advertising listings for other carriers as part of the quote process. They deliver a significantly improved and streamlined user experience – helping shoppers compare and find the right product more quickly – while also generating substantial incremental revenue.

This simple model has a dramatic effect on marketing efficiency, increasing the percentage of insurance shoppers who can be monetized from just the 5% to 15% who buy a policy to more than 50% (those who buy a policy or choose to compare rates on other carrier sites). The marketing departments at these carriers have found an entirely new revenue stream that can be funneled straight into the marketing budget to turbocharge advertising and customer acquisition.
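To see why monetizing non-buyers moves the needle, here is a hypothetical sketch of the math. Only the 5% to 15% buy rate and the more-than-50% monetizable share come from the article; the visitor count, policy value, click rate and per-click ad value are invented for illustration.

```python
# Hypothetical sketch of the shopper-monetization math described above.
# The 10% buy rate sits in the article's 5-15% range; visitor counts and
# dollar values are invented placeholders, not industry figures.

def site_revenue(visitors: int, buy_rate: float, policy_value: float,
                 click_rate: float, ad_value: float):
    """Return (policy revenue, total revenue with competitor ad listings)."""
    policy_rev = visitors * buy_rate * policy_value
    # Non-buyers who click a competitor listing now generate ad revenue.
    ad_rev = visitors * (1 - buy_rate) * click_rate * ad_value
    return policy_rev, policy_rev + ad_rev

policy_only, with_ads = site_revenue(visitors=100_000, buy_rate=0.10,
                                     policy_value=300.0, click_rate=0.45,
                                     ad_value=10.0)
monetized_share = 0.10 + (1 - 0.10) * 0.45  # buyers plus ad clickers
print(f"Monetized share of shoppers: {monetized_share:.1%}")  # roughly half
```

Under these assumed numbers, the share of shoppers generating any revenue rises from 10% to about 50%, which is the shift the article describes.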
This new way of thinking might sound radical to some auto insurance traditionalists. Yet nothing about this strategy is radical for today’s consumers. Quite the contrary: Consumers have come to expect choice and comparison when shopping online. The convergence of several consumer trends is fueling this new monetization strategy.

Growth of Online Channel

The first major trend is the growth of the online channel for auto insurance. Andy Serowitz’s recent article, Demographics and P&C Insurance, did an excellent job highlighting several macro trends affecting the industry, including the growth of the online channel. The online shift initiated by direct carriers like GEICO and Progressive has helped establish the Internet as an important acquisition channel for all carriers; the number of policies sold online has increased 400% over the last eight years. Although agents still play a significant role, the online channel now represents 20% to 25% of total insurance sales transactions and influences more than 50% of transactions. Furthermore, the younger demographic found online tends to be more price-sensitive and less brand-loyal, seeking quick results with minimal friction. Marketers have the opportunity to develop creative solutions that are better aligned with the unique perspectives and preferences of this group.

Comparison Shopping

Comparison shopping has simply become a way of life on the Internet. Auto insurance shoppers want the ability to compare. Shoppers evaluate an average of 4.5 brands and receive an average of 3.1 quotes, a figure that increases to 3.7 for online shoppers. Insurance carriers need to embrace this reality and deliver a better consumer experience.

Blurring Lines Between Commerce and Search

The last few years have seen a blurring of the lines between traditional search providers and commerce companies. Amazon is an excellent example of a commerce brand that has emerged as a viable search alternative.
In 2009, Amazon was the starting point for 18% of shoppers searching for products. In 2015, that figure reached 44%. Simultaneously, Amazon has built an impressive ad business. For several years, Amazon has been serving sponsored ad listings for other e-commerce sites alongside its own offerings. When Amazon cannot convert a shopper through a direct purchase, it earns ad revenue by sending the shopper elsewhere. Amazon’s ad business now generates an impressive $500 million per year, which it reinvests in customer acquisition and other growth initiatives.

Insurance carriers could benefit greatly by taking a page from Amazon’s playbook and recognizing that consumers are coming to their sites to initiate a more targeted, vertical-specific search. These shoppers have high purchase intent, which represents a highly valuable marketing asset with significant untapped value. For years, unlocking the media value of this type of search has been the exclusive domain of the search engines. But commerce brands now have the opportunity to play a leading role here, improving the consumer experience and delivering tangible economic rewards.

So if the consumer need is there and the economics are significant, why haven’t more carriers already embraced this strategy? For most carriers, there are two common objections to overcome.

1) Cannibalization of policy revenue is an obvious concern for most carriers that consider monetizing more shoppers. The goal is not to replace policy revenue with ad revenue. But performance results show that cannibalization can be minimized. To begin, most carriers start by monetizing low-risk, “non-served” customer segments, including non-covered geographies or consumers who fall outside of a carrier’s underwriting parameters. These non-served segments offer pure revenue upside. Within “served markets,” carriers find the impact on policy revenue to be negligible.
Many carriers, in fact, experience an increase in conversions, as greater openness builds trust with consumers and translates into more policy sales. This is also where advertising technology providers like MediaAlpha play a critical role. Technology tools now exist that empower carriers to take full control of when and how ad listings are shown, based on internal metrics and desired audiences. This ensures that ads are shown only if the economic benefit of showing an ad significantly outweighs any potential impact on policy revenue. The result is minimal cannibalization that is typically offset 3-5X by corresponding ad revenue.

2) The second-most-common concern is the perceived negative brand impact from showing ad listings for other carriers. Again, data collected on consumer reaction and preferences indicates no negative impact on brand perception. Shoppers are presented ads simultaneously with a quote (or instead of a quote), so the ad experience is in the context of shopping. This experience is in line with consumer expectations, and there is no data suggesting it creates confusion or negative sentiment.

On the contrary, brands get a powerful opportunity to deliver value where they previously could not. Without ad listings, 85% to 95% of shoppers visiting a carrier website will end their quote search unsuccessfully. The brand experience for that individual shopper is unfulfilled; the shopper is on her own to leave the site and restart a search elsewhere. By providing these shoppers with alternative listings, carriers now have a meaningful and profitable way to improve on this poor consumer experience and lift their brand perception in the process.

For auto insurance carriers, the value proposition of monetizing shoppers is clear. In a competitive marketplace that necessitates strong marketing efficiency, the ability to generate significant revenue from non-purchasing consumers is a highly compelling economic opportunity.
By unlocking this new revenue stream, incremental revenue can immediately be reinvested into more targeted, higher-value consumer segments. Brands can recapture a significant percentage of marketing inefficiency and redeploy those dollars to more effectively acquire the right customers.
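The gating logic mentioned earlier, showing a competitor listing only when its expected benefit clearly outweighs any policy-revenue impact, can be sketched as a simple expected-value rule. All numbers here are hypothetical; the 3x margin merely echoes the 3-5X offset the article reports, and real ad platforms weigh many more signals.

```python
# Hypothetical expected-value gate for showing competitor ad listings,
# in the spirit of the ad-control tooling described above. The 3x margin
# echoes the article's 3-5X offset; all other numbers are illustrative.

def should_show_ad(p_convert: float, policy_value: float,
                   p_click: float, ad_value: float,
                   cannibalization: float = 0.1, margin: float = 3.0) -> bool:
    """Show an ad only if expected ad revenue clearly beats expected loss."""
    expected_loss = p_convert * cannibalization * policy_value
    expected_ad_rev = p_click * ad_value
    return expected_ad_rev >= margin * expected_loss

# Non-served shopper (carrier can't write the policy): nothing to cannibalize.
print(should_show_ad(p_convert=0.0, policy_value=300, p_click=0.4, ad_value=10))  # True
# High-intent served shopper: expected loss dominates, so suppress the ad.
print(should_show_ad(p_convert=0.5, policy_value=300, p_click=0.4, ad_value=10))  # False
```

The non-served case is why carriers typically start there: with zero conversion probability, every ad click is pure upside.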

Jeff Navach


Jeff Navach is the vice president of marketing for MediaAlpha, where he is responsible for leading all marketing activities, including customer insights, brand development and demand generation. MediaAlpha operates the leading technology platforms for real-time buying and selling of high-intent, vertical-specific search media.

Employers: Don't Pay for 'Never Events'

A key way to save on healthcare costs (while helping employees): Use hospitals that take responsibility for the costs of "never events."

The initial installment in this series expressed concern that too narrow a focus on wellness diverts companies’ attention from more compelling opportunities to save money and improve employee health outcomes. This installment starts with a related shocker: By far the most costly inpatient diagnosis code, septicemia, is not addressed by any wellness program in the country. Here is the government’s official ranking: [table of most costly inpatient diagnoses not reproduced]

Septicemia due to contamination, which is just one of many avoidable hospital errors, shows that there is a major opportunity to save money by directing employees to hospitals that are most likely to avoid errors. To back their commitment to avoiding errors, such hospitals also usually offer a “never-events” policy, meaning they agree not to be paid for events that are their fault and that should never happen. So your employees will be more likely to have a safer experience—and, if they don’t, you don’t pay. (To be fair to hospitals, not all septicemia is contracted there. At the same time, many blood infections contracted in hospitals are not primary-coded as septicemia.)

The opportunity for you is to highlight hospitals within your network that agree to a list of specific items that make up a never-events policy. “Highlighting” might include waived deductibles or co-pays for employees who choose highlighted hospitals over others, thus noodging more employees to safer hospitals.

What is included in a “never-events” policy? The Leapfrog Group, the nation’s leading arbiter of hospital quality, has a policy that requires hospitals to undertake five steps following a never-event:
  • apologize to the patient;
  • report the event;
  • perform a root-cause analysis;
  • waive costs directly related to the event;
  • provide a copy of the hospital’s policy on never-events to patients and payers upon request.
Examples of never-events culled from this complete list are:
  • Certain hospital-acquired infections/septicemia
  • Wrong-site/wrong surgery/wrong patient
  • Objects left in body
  • Wrong blood type administered
  • Serious medication errors
  • Air embolisms
  • Contaminated or misused drugs/devices
  • Death
Any given never-event is rare, but in total 5% to 10% of inpatients suffer a significant adverse event during their stays. The consequences – in cost, suffering and lost productivity – can be substantial. No need to take my word for the cost: The Leapfrog Group provides a Hidden Surcharge Calculator that can be used to estimate the financial impact of hospital errors.

Do hospitals in your network have a never-events policy? At the very minimum, by default they have such a policy for Medicare, which doesn’t pay extra for certain never-events. Medicare still pays the standard diagnosis-related group (DRG) case rate but doesn’t reimburse “outliers” separately if the added hospital time was caused by a never-event. Obviously, the DRG rates are set a little higher to begin with. So hospitals that do a good job – typically Leapfrog-rated “A” and “B” in the Hospital Safety Score report – embrace this payment scheme, while others would have been better off getting paid the old-fashioned way.

Some hospital systems extend this policy to employers – or will, if you or your carrier ask, you are a large enough customer and their quality is high enough that the economics work out for them. Leapfrog A-rated hospitals are therefore the most likely to be willing to negotiate a never-events policy for your employees.

These hospitals aren’t necessarily the name brands in your marketplace. In Washington, for example, Virginia Mason Medical Center (VMMC) is the hospital consistently earning the highest Leapfrog scores. Not surprisingly, it was among the first hospitals in the country to offer a never-events policy to employers. The hospital was highlighted in Cracking Health Costs for its many best practices. VMMC is one of the few hospitals that Walmart, Lowes and other jumbo employers will actually fly employees into, to ensure the best care. And yet you’ve never heard of VMMC, have you?

So what should you do? You still need to offer a wide local hospital network to employees.
It simply isn’t worth the inevitable pushback to require a narrow hospital network. Instead, just ask existing network hospitals to offer you a never-events policy, or to let you become part of a policy they already offer to employers.

There is plenty of precedent for this. For years, the state of Maine has tied hospital payments for its own employees to quality and safety standards, including Leapfrog standards. And Maine, despite being among the poorest states, consistently ranks #1 or #2 in Leapfrog quality ratings. Coincidence? I think not. Particularly if you can contract in conjunction with your local business coalition, you have the chance to influence hospital safety, just as Maine did.

Additionally, you can follow the lead of the jumbo employers named above and contract with the country’s safest hospitals for any employees who wish to make the trip. Yes, I know, you aren’t a “jumbo employer.” But a firm named Edison Health helps small employers with the contracting and logistics of such arrangements. It also offers a tool, validated by the Validation Institute, to help you figure out whether medical travel would be a worthwhile endeavor for you.

This type of contracting requires a little work on your end; if all you want is discounts and coverage and you don’t want to put in the work, you could punt to an exchange. On the other hand, you self-administer your health benefit for one good reason: to influence employee health, and this is a clear opportunity to do so. By contrast, wellness is a LOT of work…and likely increases your costs in the short run. Wellness will take years to pay dividends, if any, whereas you can start influencing employee hospital choice immediately.
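As a rough illustration of the stakes (not a substitute for Leapfrog's actual Hidden Surcharge Calculator), an employer can multiply its annual admissions by the 5% to 10% adverse-event rate cited above. The admission count and per-event cost below are hypothetical placeholders an employer would replace with its own claims data.

```python
# Rough expected-cost estimate in the spirit of Leapfrog's Hidden Surcharge
# Calculator (the real calculator is far more detailed). The 5-10% adverse-
# event rate is from the article; the admissions count and per-event cost
# are hypothetical placeholders, not benchmarks.

def hidden_surcharge(admissions: int, event_rate: float,
                     cost_per_event: float) -> float:
    """Expected annual cost of hospital adverse events for a covered group."""
    return admissions * event_rate * cost_per_event

low = hidden_surcharge(admissions=500, event_rate=0.05, cost_per_event=20_000)
high = hidden_surcharge(admissions=500, event_rate=0.10, cost_per_event=20_000)
print(f"Estimated range: ${low:,.0f} to ${high:,.0f}")
```

Even under these made-up inputs the range is large enough to show why a never-events policy, which shifts those costs back to the hospital, is worth negotiating.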

3 Tips for Improving Healthcare Literacy

Hint: Using newsletters to communicate your new cost-containment solution will not work because your employees will not read them.

Today, innovative cost-containment solutions are helping employers "curb" the increasing cost of healthcare. However, these solutions are only as good as the education tied to them. A solution without effective education is useless and can even be costly.

Employee education has been a sticking point in the employee benefits world. Many employers haven't done a good job educating employees and have thus missed the boat on containing costs. According to a 2003 assessment (I know, old!) by the U.S. Department of Education, only 12% of U.S. adults have a proficient level of healthcare literacy. That is scary.

The days of educating the workforce about what they have, how much it costs and how to sign up are long gone. Stop repeating the same message year after year. The focus of your education has to be on improving the healthcare literacy of your workforce. The good news is that there are consultants around the country creating some amazing messages. Folks like Jim Millaway, Gary Becker and Al Lewis are innovating the way benefit education is provided, helping employers reduce the cost of health insurance.

With that, let's look at three employee education tips that can help you contain costs.

See Also: On Air Traffic Control and Health Costs
  1. Effective Education Is a Year-long Process
If your education strategy consists of nothing more than the annual open enrollment meeting, we need to talk, so please keep reading! By the time your employees walk out of the meeting, they will have forgotten 90% of what they heard, especially how to use a new cost-containment tool effectively. To ensure the new solution is a success, you have to keep the message in front of your employees all year long.
  2. Make Sure Your Message Helps You Accomplish Your Goal
Remember, your goal is to "curb" or even reduce the cost of your health insurance, so strategic education has to be a part of your long-term plan. Do not rely on the communication provided by carriers and vendors; it is often too vague and provides information most of your employees already know (e.g., your smokers already know they should quit, as their doctors have been telling them for years). To achieve your goal, you need to make sure your education aligns with the objective: improving health literacy. Focus on the kind of education that will help your employees help your medical plan save money. Strategic education is the wave of the future. Innovative solutions like Quizzify are giving employees the opportunity to become stewards of their own healthcare journey, helping both their checkbook and the bottom line of their employer.
  3. Your Message Has to Be Clear and To-the-Point
Trying to find the right avenue for educating the workforce is not easy. However, using newsletters and brochures to communicate your new cost-containment solution will not work, because your employees will not read them. One way to get your message across effectively is through video. Videos require employees only to hit "play," are short and to the point, and can be customized to convey the message you want. Employees like videos because little time and effort is wasted in watching, and the employer is able to craft the message (with help) to best meet its objective. A video campaign can be a very effective way of improving the health literacy of your workforce through short, focused messages.

Crafting the right educational message is hard work and requires time and effort. But if it is done well, you will not only be happy with your new cost-containment solution, you will create a highly educated and empowered workforce that has a positive impact on your bottom line.

Andy Neary


Andy Neary is a healthcare strategist with VolkBell in Longmont, CO. Neary has more than 14 years of experience in helping employers affect the rising cost of healthcare through innovative strategies. His strategies help employers cut through the complexity of a broken healthcare system.

New Attack Vector for Cyber Thieves

Thieves are hacking personal email to pose as an authority figure and dupe a subordinate into doing something quickly, without questions.

It has become commonplace for senior executives to use free Web mail, especially Gmail, interchangeably with corporate email. This has given rise to a type of scam in which a thief manipulates email accounts. The goal: impersonate an authority figure to get a subordinate to do something quickly, without asking questions. The FBI calls this “CEO fraud,” and a surge of these capers has resulted in scammers stealing a stunning $750 million from more than 7,000 U.S. companies from October 2013 through August 2015.

Here is an example in which the scammer targets an attorney from a big city in the Northeast.

Attack vector: The scammer gathers intelligence about real estate transactions handled by an attorney and drills down on a specific deal in which the law firm is handling the purchase of a $450,000 home for a client. The scammer learns this attorney is in the habit of using his personal Gmail account interchangeably with his law firm’s email. As the transaction approaches the final step, the attorney’s paralegal receives a spoofed email that appears to come from her boss. She instantly follows a directive to cancel a check for $450,000 that she is about to mail and instead wires the funds into an account designated by the scammer.

Distinctive technique: The funds initially get routed to another law firm in the Southwest. A subordinate in this law firm also appears to have been spoofed by the scammer, prepared to move the funds once again, this time into an account set up in a U.S. branch office of Sumitomo Bank, a giant global institution with headquarters in Tokyo. “At this point, it is not likely the $450,000 will ever be recovered,” says IDT911 Chief Privacy Officer Eduard Goodman. “Once a transfer like this is made, you can’t really unring that bell.”

Wider implications: U.S. consumers are well protected by federal law, and banks usually will reimburse individual consumers victimized by cyber criminals.
However, banks are under no legal obligation to offer any relief to businesses, large or small, that have been tricked like this. Most of the $750 million lost in documented cases of CEO fraud has most likely been absorbed by the duped business entities.

What follows are excerpts from ThirdCertainty’s interview with Goodman. (Answers edited for length and clarity.)

3C: Businesses are losing one heck of a lot of money to CEO fraud.
Goodman: Yeah, absolutely. This one was for about $450,000. There is another woman with a ballet company who recently lost about $100,000. It’s significant chunks, let’s put it that way. And because this is happening in a business setting, it’s a little bit different in that your bank won’t stand behind you. It’s caveat emptor. There is no consumer protection. When something like this happens to your business, you’re out of luck.

3C: Why aren’t suspicious transactions flagged more often?

Goodman: The government will tend to go after companies for anything that may have to do with consumer violations. But when businesses impact other businesses, the government doesn’t do a damn thing, even if the victim is a really small business and they’re essentially consumers in and of themselves. Banks have that unfair advantage to say, ‘Well, sorry, should have flagged it, but we just process it for you.’

3C: So by using free Web mail this attorney sort of invited spoofing?

Goodman: He kind of comingled accounts, that’s the thing. He had his law firm’s email, and he also had a personal Gmail account. He would send emails from both accounts. That is something that has become a very common practice. He probably had previously emailed himself something from his actual work account into his Gmail account. This scammer probably got into his Gmail account, and then made the connection to his law firm account. Then it was off to the races. The paralegal gets the wire transfer request from an email that’s very close to an authentic law firm email except there’s an extra letter in the domain name. It looks very credible.

3C: Could this have been avoided?

Goodman: Yes, by taking the extra 45 seconds to make a phone call. Pick up the phone and verify things instead of getting caught up in the workday.
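The spoof Goodman describes hinged on a sender domain that was one letter off from the firm's real domain. A minimal sketch of catching that pattern automatically (the domain names here are hypothetical, and a real mail filter would do much more):

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two strings, via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def is_lookalike(sender_domain: str, trusted_domains: list[str]) -> bool:
    """Flag a sender domain that is close to, but not exactly, a trusted domain."""
    for trusted in trusted_domains:
        d = edit_distance(sender_domain.lower(), trusted.lower())
        if 0 < d <= 2:   # near miss: likely typosquat; 0 means a legitimate match
            return True
    return False

# "smithlawfirm.com" is a made-up trusted domain; the spoof adds one letter.
print(is_lookalike("smithlawffirm.com", ["smithlawfirm.com"]))  # True
print(is_lookalike("smithlawfirm.com", ["smithlawfirm.com"]))   # False
```

Even a crude check like this would have flagged the paralegal's wire-transfer request for a manual phone call, which is exactly the safeguard Goodman recommends.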

Byron Acohido


Byron Acohido is a business journalist who has been writing about cybersecurity and privacy since 2004, and currently blogs at LastWatchdog.com.

How Bad Is Insurance Fraud Really?

For starters: Insurance fraud in the U.S. is estimated to be at least $80 billion a year.

[Infographic: The World of Insurance Fraud] This infographic was originally published on Easy Life Cover.

Eamonn Freeman


Eamonn Freeman is managing director of Easy Life Cover, a life insurance company based in Ireland. He regularly researches and creates content related to his industry, ranging from infographics to blog posts.

Data Science: Methods Matter

The details of exploratory analysis can be tedious, yet they are the crux of the genius involved in data science project methodology.

When data analytics uses simple formulas, much conjecture and an arbitrary methodology, it often fails in what it was designed to do: give accurate answers to pressing questions. So, at Majesco, we pursue a proven data science methodology in an attempt to lower the risk of misapplying data and to improve predictive results.

In Methods Matter, Part 1, we set the stage for explaining the Majesco Data Science Project Lifecycle. The goal is to give insurance organizations a picture of the methodology that goes into data science. We discussed CRISP-DM and the opening phase of the life cycle, project design. In Part 2, we will discuss the heart of the life cycle: the data itself. To do that, we’ll take an in-depth look at two central steps: building a data set, and exploratory data analysis. These two steps compose the phase that is most critical for project success, and they illustrate why data analytics is more complex than many insurers realize.

Building a Data Set

Building a data set is, in one way, no different from gathering evidence to solve a mystery or a criminal case. The best case will be built with verifiable evidence. The best evidence will be gathered by paying attention to the right clues. There will almost never be just one piece of evidence used to build a case, but a complete set of gathered evidence — a data set. It’s the data scientist’s job to ask, “Which data holds the best evidence to prove our case right or wrong?” Data scientists will survey the client or internal resources for available in-house data, then discuss obtaining additional external data to complete the set. This search for external data is more prevalent now than previously. The growth of external data sources and their value to the analytics process has ballooned with the increase in mobile data, images, telematics and sensor availability.
A typical data set might include external sources such as credit file data from credit reporting agencies, alongside internal policy and claims data. This type of information is commonly used by actuaries in pricing models and is contained in state filings with insurance regulators.

Choosing what features go into the data set is the result of dozens of questions and some close inspection. The task is to find the elements or features of the data set that have real value in answering the questions the insurer needs answered. In-house data, for example, might include premiums, number of exposures, new and renewal policies and more. The external credit data may include information such as number of public records, number of mortgage accounts and number of accounts that are 30-plus days past due, among others. The goal at this point is to make sure that the data is as clean as possible. A target variable of interest might be something like frequency of claims, severity of claims or loss ratio. This step is often performed by in-house resources, insurance data analysts familiar with the organization’s available data or external consultants.

At all points along the way, the data scientist is reviewing the data source’s suitability and integrity. An experienced analyst will often quickly discern the character and quality of the data by asking questions such as: Does the number of policies look correct for the size of the book of business? Does the average number of exposures per policy look correct? Does the overall loss ratio seem correct? Does the number of new and renewal policies look correct? Are there an unusually high number of missing or unexpected values in the data fields? Is there an apparent reason for something to look out of order? If not, how can the data fields be corrected? If they can’t be corrected, are the data issues so important that these fields should be dropped from the data set? Even further, is the data so problematic that the whole data set should be redesigned, or the whole analytics project scrapped?

It shouldn’t be overlooked that there is more value in identifying problematic issues early than in a completed data science project that used inaccurate or incomplete data. Scrapping data sets, or even whole projects, at this point will save wasted time and effort. Once the data set has been built, it is time for an in-depth analysis that steps closer toward solution development.

Exploratory Data Analysis

Exploratory data analysis takes the newly minted data set and begins to do something with it — “poking it” with measurements and variables to see how it might stand up in actual use. The data scientist runs preliminary tests on the “evidence.” The data set is subjected to a deeper look at its collective value. If the percentage of missing values is too large, the feature is probably not a good predictor variable and should be excluded from future analysis. In this phase, it may make sense to create more features, including mathematical transformations for non-linear relationships between the features and the target variable.

For non-statisticians, marketing managers and non-analytical staff, the details of exploratory data analysis can be tedious and uninteresting. Yet they are the crux of the genius involved in data science project methodology. Exploratory data analysis is where data becomes useful, so it is a part of the process that can’t be left undone. No matter what one thinks of the mechanics of the process, the preliminary questions and findings can be absolutely fascinating.

See Also: The Science (and Art) of Data, Part 1

Questions such as these are common at this stage:
  • Does frequency increase as the number of accounts that are 30-plus days past due increases? Is there a trend?
  • Does severity decrease as the number of mortgage trades decreases? Do these trends make sense?
  • Is the number of claims per policy greater for renewal policies than for new policies? Does this finding make sense? If not, is there an error in the way the data was prepared or in the source data itself?
  • If younger drivers have lower loss ratios, should this be investigated as an error in the data or an anomaly in the business?
Some trends will not make any sense, and perhaps these features should be dropped from analysis or the data set redesigned.
The more we look at data sets, the more we realize how few limits there are to what can be discovered or uncovered, and how quickly those limits are shrinking. Thinking of relationships between personal behavior and buying patterns, or between credit patterns and claims, can fuel the interest of everyone in the organization. As the details of the evidence begin to gain clarity, the case also begins to come into focus. An apparent “solution” begins to appear, and the data scientist is ready to build it.

In Part 3, we’ll look at what is involved in building and testing a data science project solution and how pilots are crucial to confirming project findings.
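The missing-value screen described above can be sketched in a few lines. This is an illustrative toy, not Majesco's methodology: the records, field names and the 30% cutoff are all assumptions made for the example.

```python
def missing_fraction(rows: list[dict], feature: str) -> float:
    """Fraction of records where the given feature is missing (None)."""
    missing = sum(1 for r in rows if r.get(feature) is None)
    return missing / len(rows)

def screen_features(rows: list[dict], features: list[str],
                    max_missing: float = 0.3) -> list[str]:
    """Keep only features whose missing-value rate is at or below the threshold."""
    return [f for f in features if missing_fraction(rows, f) <= max_missing]

# Toy policy records; 'mortgage_accounts' is missing in half of them.
policies = [
    {"exposures": 2, "past_due_30": 0, "mortgage_accounts": 1},
    {"exposures": 1, "past_due_30": 3, "mortgage_accounts": None},
    {"exposures": 4, "past_due_30": 1, "mortgage_accounts": 2},
    {"exposures": 1, "past_due_30": 0, "mortgage_accounts": None},
]
print(screen_features(policies, ["exposures", "past_due_30", "mortgage_accounts"]))
# ['exposures', 'past_due_30'] -- 50% missing exceeds the 30% cutoff
```

In practice the same screen is usually one line against a dataframe library, and the threshold is a judgment call made per project, for exactly the reasons discussed above.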

Denise Garth


Denise Garth is senior vice president, strategic marketing, responsible for leading marketing, industry relations and innovation in support of Majesco's client-centric strategy.

Protecting Institutions From Cyber Risks

In the wake of FSU’s inadvertent disclosure crisis, a review of the privacy procedures in place at institutions may be in order.

Recently, an email glitch at Florida State University resulted in the accidental emailing of alleged misconduct and housing violations to more than 13,000 current and former students. The emails may have revealed the personal information of multiple students and may have disclosed confidentially reported information relating to harassment and alleged sexual assaults. The emails were not sent by anyone on campus but were the result of a technical glitch in the university’s database. The glitch left students confused and, in some cases, frightened and concerned for personal safety. University personnel, including FSU’s Title IX Coordinator, moved quickly to address student concerns, but the proverbial cat was already out of the bag. It will likely be some time before the full consequences of the breach, or the final outcomes, are known.

In the wake of FSU’s inadvertent disclosure crisis, a review of the privacy procedures in place at an institutional level may be in order to prevent these types of unintended disclosures in the future. It is also important to review the indemnity agreements between the university and third-party service providers, such as the database administrator or software provider, and to review how cyber liability insurance may respond in the event of a data breach.

Data Privacy Protocols

When discussing data privacy protocols, there are three primary areas of concern. They are how to protect:
  1. Information (e.g., personally identifiable data stored on a server)
  2. Mechanisms/systems that make up the physical housing for the information (e.g., the server itself)
  3. Users accessing the information
A breach of confidential information or data loss can occur at any of the three levels in any number of ways. It is impossible to quantify or evaluate every single manner in which a breach may occur, or how data may be lost. What is important is establishing a protocol that takes into consideration all three areas where a breach may occur. In most cases, it is easy to focus on external threats and user misconduct but overlook the potential for data breach arising from internal system failures or glitches.

See Also: How Colleges Can Work With Insurers

In developing data security protocols, it is important to engage in a comprehensive threat assessment that evaluates user-based and external potential breach areas as well as the possibility of an equipment failure or glitch. A few areas to consider when reviewing internal data breach/data loss response protocols:
  1. Who is the architect of the protocols? (Are the foxes guarding the hen house?)
  2. Does your protocol comply with statutory requirements and contractual requirements such as PCI compliance, Title IX, HIPAA or other state and federal laws?
  3. Does the protocol specifically address each element of concern identified above? (protection of information, protection of systems, protection of users)
  4. Is there a progressive (tree) notification process? (Do the participants understand where they are in the tree? Does the process include notification to external stakeholders such as legal authorities, insurers, external legal counsel, and crisis management or PR firm?)
  5. Is there strong leadership/executive level buy-in of the protocol?
  6. Is there a training element? (Does it include tabletop or scenario-based practice?)
  7. Is there periodic review of systems and processes to identify and change obsolete protocols and replace key stakeholders in the event of turnover?
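The progressive (tree) notification process in item 4 can be modeled as a simple escalation structure, so every participant can see where they sit in the tree. The roles and ordering below are hypothetical, for illustration only; an actual protocol would name specific people and contact channels.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One participant in the notification tree."""
    role: str
    notify_next: list["Node"] = field(default_factory=list)

def escalation_order(root: Node) -> list[str]:
    """Breadth-first walk: who gets notified, in order, from the first responder out."""
    order, queue = [], [root]
    while queue:
        node = queue.pop(0)
        order.append(node.role)
        queue.extend(node.notify_next)
    return order

# Hypothetical tree: security lead first, then internal and external stakeholders.
tree = Node("security lead", [
    Node("CISO", [Node("legal counsel"), Node("insurer")]),
    Node("IT operations", [Node("PR firm")]),
])
print(escalation_order(tree))
# ['security lead', 'CISO', 'IT operations', 'legal counsel', 'insurer', 'PR firm']
```

Writing the tree down in a machine-readable form also makes item 7 easier: a periodic review can diff the current tree against the org chart to catch stale entries after turnover.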
Indemnity/Hold Harmless/Limitation of Liability Agreements

Vendor service agreements, user license agreements and even software agreements typically include indemnity terms. In most cases, these terms are one-sided, in favor of the seller or service provider. Essentially, the purpose of an indemnity agreement is to contractually shift responsibility for loss/damage from one party (seller) to another party (buyer). These types of agreements vary in scope, strength and enforceability but, in most cases, involve a release or limitation of the buyer’s claims or potential claims against the seller. In some cases, the buyer may assume full responsibility for any loss, including an affirmative responsibility to protect and defend the seller in the event of third-party claims. There may also be a limitation on the type and extent of damages a buyer may seek against the seller or service provider; in some cases, the recovery may be limited to the value of the contract or agreement.

Your institution’s risk management and legal teams should carefully review indemnity terms to fully understand the extent of risk assumed by the institution in executing an agreement with a third party. As part of a comprehensive risk management process, consider limiting acceptance of comprehensive indemnification terms in a contract. This is especially important where the institution is being asked to waive its legal rights or outright indemnify a vendor for the vendor's own negligence, misconduct or product/service failure. A few areas to consider in reviewing contract terms:

Indemnity/Hold-Harmless Terms
  1. Who is the indemnitee (recipient of the indemnity) and who is the indemnitor (provider of the indemnity)?
  2. Does the indemnity agreement require one party to indemnify for the other party’s own negligence or misconduct?
  3. Does the indemnity agreement include an obligation to affirmatively defend the indemnitee? Is there a time limit to accept or reject the defense?
  4. Who is responsible for counsel selection?
  5. Is approval needed to settle claims?
Limitation of Liability
  1. Is there a limitation of liability?
  2. Does the limitation favor the institution or vendor?
  3. Is the limitation reasonable in light of the potential for loss or damage or the nature of the service provided? (Limiting liability to the contract value may not be reasonable if the contract value is low and the risk of loss is high.)
  4. Are there carveouts for negligence or misconduct, or is the limitation of liability intended as the sole remedy?
  5. Does the limitation of liability conflict with the indemnity terms? 
Cyber Liability Insurance

In the past few years, cyber liability insurance has gained significant attention among insurance brokers and clients. Cyber insurance refers to a suite of related insurance products that provide various types and levels of protection to insureds that may suffer from data loss or data breach. There are three major components of cyber liability insurance:
  1. First-party coverage for loss or damage to or interruption of the institution’s electronic equipment and electronic services
  2. Third-party coverage for the liability imposed upon the institution for loss or exposure of third-party data; coverage for third parties may include costs for notification, credit monitoring and credit restoration services
  3. Coverage for regulatory requirements as well as for fines and penalties assessed against the institution as part of a covered loss
Unlike some property and casualty insurance products, such as general liability or auto insurance, cyber liability insurance is not standardized. Instead, each insurance company issues a customized policy. These policies may vary greatly from insurer to insurer and can often include a la carte coverages that may significantly affect the breadth and scope of coverage. A careful review of institutional and vendor policies is strongly recommended to ensure that the coverage purchased addresses the actual risks of the institution. Some questions to consider when reviewing your cyber liability policy:

See Also: A Better Way to Assess Cyber Risks?

First-Party Coverage
  1. How does the policy respond to loss or damage to the institution’s own computer equipment, servers or other hardware components?
  2. How does the policy define a physical loss? (Does it include loss of Internet-based platforms such as web portals, or only loss to physical components?)
  3. Is there a waiting period for business or data interruption? 
Third-Party Coverage
  1. How does the policy respond to breach of confidential or personally identifiable information?
  2. Is coverage provided based on a total number of affected persons or provided on a blanket limit basis?
  3. Is there a minimum/maximum affected person limit?
  4. How is a third-party loss defined? Does it include accidental loss, computer glitches or loss of non-electronic information? (e.g., is there coverage if a laptop containing personally identifiable information is lost? Or if physical records are removed or destroyed?)
  5. Is the coverage triggered only when there is a statutory or governmental notification requirement, or does it cover voluntary notification?
Fines/Penalties
  1. Does the policy include coverage for fines/penalties including payment card industry (PCI) data security standards noncompliance?
  2. Is there a sublimit for the coverage?
  3. Are punitive or exemplary damages included? 
Conclusion

It is important to take a thoughtful approach to securing data in all its various forms. An individual protocol alone is not enough to fully secure your institution in the event of a data breach. It is also important to review vendor service agreements, user agreements and software licenses to ensure an understanding of the indemnity/hold-harmless and limitation of liability provisions that may be present in a current agreement, and that may open the institution to unintended liability due to the negligence or misconduct of a third party. Finally, it is important to review and understand the types and scope of the institution’s cyber liability coverage, or to consider purchasing this coverage if the institution does not currently maintain it.

Mya Almassalha


Mya Almassalha joined the Encampus team in early 2016; she brings with her more than a decade of general insurance and risk management expertise, with a strong focus on higher education and organizational risk management.

Risks of Malpractice Claims (Video)

Does spending more on tests reduce malpractice claims? Doctors believe so, but the data is confusing.

Healthcare Matters sits down with Dr. Richard Anderson, chairman and CEO of The Doctors Company. In Part 3 of the series, we ask Dr. Anderson about the 2015 BMJ study, “Physician spending and subsequent risk of malpractice claims: observational study.” What did he find interesting about the results, and did he have any issues with the way the study was conducted?

Erik Leander


Erik Leander is the CIO and CTO at Cunningham Group, with nearly 10 years of experience in the medical liability insurance industry. Since joining Cunningham Group, he has spearheaded new marketing and branding initiatives and been responsible for large-scale projects that have improved customer service and facilitated company growth.


Richard Anderson


Richard E. Anderson is chairman and chief executive officer of The Doctors Company, the nation’s largest physician-owned medical malpractice insurer. Anderson was a clinical professor of medicine at the University of California, San Diego, and is past chairman of the Department of Medicine at Scripps Memorial Hospital, where he served as senior oncologist for 18 years.