
How to Prevent IRS Issues for Captives

Domiciles have no responsibility to consider federal tax issues when licensing captives -- but they should do so anyway.

A regulator of captive insurance is responsible for many aspects of the business of captive insurance companies. He or she must coordinate the application process for obtaining a license, including the financial analysis and financial examination of each captive insurance company. The regulator is also a key marketing person in promoting the domicile as a favorable place to do business, thus fostering economic development for the state.

The captive regulator is not, however, a tax adviser. No statute or regulation in any domestic domicile requires an analysis of the potential tax status of the captives under consideration or under regulation. If the application complies with the stated statutory and regulatory requirements, the regulator must favorably consider the application and allow the new company to be licensed as an insurance company under state law. That new insurance company may not, however, be considered an insurance company under federal tax law.

The Internal Revenue Service recently listed captives in its annual “Dirty Dozen” tax scams, citing “esoteric or improbable risks for exorbitant premiums.” And at least seven captive managers (and therefore their clients) have been targeted for “promoter” audits, for allegedly promoting abusive tax transactions. Yet all of these captives received a license from a regulator, mostly in the U.S. Obviously, these regulators did not consider the pricing of the risks to be transferred to the captive, except perhaps at the macro level.

Should the domicile care about the potential tax status of licensed captives?
David Provost, Vermont’s Deputy Commissioner of Captive Insurance, has said, “We do not license Section 831(b) captives; we license insurance companies.” While that statement is technically correct, this paper argues that, with respect to small captives, regulators should care about the tax implications of licenses in extreme cases, consistent, of course, with the laws and regulations under which they operate.

Small captives, i.e. those with annual premiums of no more than $1.2 million, can elect under section 831(b) of the Internal Revenue Code to have their insurance income exempt from federal taxation. This provision, combined with certain revenue rulings and case law, creates a strong tax and financial planning incentive to form such a captive insurance company. This incentive can lead to an “over-pricing” of the premiums paid to the new captive, to maximize the tax benefits on offer. The premiums may be “over-priced” relative to market rates, even after being adjusted for the breadth of the policy form, the size and age of the insurance company and, in some cases, the uniqueness of the risk being insured by the captive.

But “over-priced” in whose eyes? Insurance regulators are usually more concerned with whether enough premium is being paid to a captive to meet its policy obligations. From that perspective, “too much” premium can never be a bad thing. Indeed, captive statutes and regulations generally use the standard of being “able to meet policy obligations” as the basis for evaluating captive applications and conducting financial reviews. And actuarial studies provided with captive applications generally conclude that “…the level of capitalization plus premiums will provide sufficient funds to cover expected underwriting results.” These actuarial studies do not usually include a rate analysis, by risk, because none is required by captive statute or regulation.
Small “831(b)” captives, therefore, may easily satisfy the financial requirements set forth in captive statutes and regulations. If, however, the Internal Revenue Service finds on audit that the premiums paid to that captive are “unreasonable,” then the insured and the captive manager may face additional taxes and penalties, and the captive may be dissolved, to the loss of the domicile. And, as has happened recently, the IRS may believe that a particular captive manager has consistently over-priced the risk being transferred to its captives and may initiate a “promoter” audit, covering all of those captives.

Such an action could result in unfavorable publicity for the domiciles that approved those captive applications, regardless of the fact that the regulators were following their own rules and regulations to the letter. It is that risk of broad bad publicity that should encourage regulators to temper the rush to license as many captives as possible. There should be some level of concern for the “reasonableness” of the premiums being paid to the captives.

One helpful step would be to change captive statutes or regulations to require that actuarial feasibility studies include a detailed rate analysis. Such an analysis would compare proposed premium rates with those of the marketplace and offer specific justifications for any large deviations from market. (Given the competition among jurisdictions for captive business, such a change would only be possible if every domicile acted together, eliminating the fear that a domicile would lose its competitive edge by acting alone.)

Absent such a change, however, regulators still have the power to stop applications that do not pass the “smell test.” Most captive statutes require each applicant to file evidence of the “overall soundness” of its plan of operation, which would logically include its proposed premiums.
If the premiums seem unreasonably high for the risks being assumed, the plan of operation may not be “sound,” in that it might face adverse results upon an IRS audit. Regulators are not actuaries and often have had little or no underwriting experience. They therefore could not and should not “nit-pick” a particular premium or coverage. But some applications may be so egregious on their face that even non-insurance people can legitimately question the efficacy of the captive’s business plan.

Insurance professionals know from both experience and nationally published studies that the cost of risk for most companies is less than 2% of revenue. “Cost of risk” includes losses not covered by traditional third-party insurance, which are generally the type of losses covered by “small” captive insurance companies. If a captive regulator receives an application in which the “cost” of coverage by that captive is, say, 10% to 12% or more of the revenue of the insured, alarm bells should go off. That captive certainly would have plenty of assets to cover its policy obligations! But in the overall scheme of things, including the real world of taxation, that business plan is not likely “sound.”

At that point, the regulator has a choice of rejecting the applicant, requiring a change in the business plan/premiums or demanding additional support for the proposed plan. We are aware of one case in which a captive regulator, on receiving an application whose premiums did not appear reasonable, required the applicant to provide a rate analysis from an independent actuary. A rate analysis is not, of course, a guarantee that the IRS will find the premiums acceptable on audit. No one can expect guarantees, but a properly done rate analysis has a better chance of assuring all the parties that the captive has been properly formed as a real insurance company and not simply as a way to reduce the taxable income of the insured and its owners.
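The screening heuristic just described (a typical cost of risk near 2% of revenue, with alarm bells at roughly 10% or more) can be sketched as a back-of-the-envelope check. This is purely an illustrative sketch of the article's rule of thumb, not a regulatory standard; the function name, thresholds and messages are assumptions made for the example:

```python
# Hypothetical "smell test" for captive premium reasonableness.
# Thresholds reflect the rules of thumb discussed in the text;
# they are illustrative assumptions, not regulatory requirements.

TYPICAL_COST_OF_RISK = 0.02   # cost of risk for most companies: under 2% of revenue
ALARM_THRESHOLD = 0.10        # 10% or more of revenue: alarm bells should go off

def screen_application(annual_premium, insured_revenue):
    """Return the premium-to-revenue ratio and a rough verdict."""
    if insured_revenue <= 0:
        raise ValueError("insured_revenue must be positive")
    ratio = annual_premium / insured_revenue
    if ratio >= ALARM_THRESHOLD:
        return ratio, "alarm: request independent rate analysis"
    if ratio > TYPICAL_COST_OF_RISK:
        return ratio, "above typical cost of risk: review justification"
    return ratio, "within typical range"

# Example: $1.2 million in premiums against $10 million of insured revenue
ratio, verdict = screen_application(1_200_000, 10_000_000)
print(f"{ratio:.0%} -> {verdict}")  # 12% -> alarm: request independent rate analysis
```

A real review would, of course, also weigh the breadth of the policy forms and the uniqueness of the risks before drawing any conclusion.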
Captive insurance regulators have a big job, particularly as the pace of captive formations increases. To protect the domicile from appearing on the front page of the Wall Street Journal, the regulator must consider all aspects of the proposed captive’s business, including, in extreme cases, its vulnerability to adverse federal tax rulings.

The Thorny Issues in a Product Recall

A product recall can devastate a company's reputation and cut market share -- even if it is handled perfectly and the brand is a great one.

In 1982, people in Chicago began dropping dead from cyanide poisoning, which was linked to Johnson & Johnson’s Tylenol in select drug stores. Johnson & Johnson immediately pulled all Tylenol from the shelves of all stores, not just those in Chicago. It was ultimately determined that the product had been tampered with by someone outside of Johnson & Johnson. But the company's aggressive actions produced a legend: The Tylenol scare was chalked up as the case to review for an effective brand-preserving (even brand-enhancing) product recall strategy.

In 2011, though, the FDA took the extraordinary step of taking over three Johnson & Johnson plants that produced Tylenol because of significant problems with contamination. This time, Johnson & Johnson could not blame a crazed killer, only itself. A company that should have learned from its own celebrated case study had not retained that knowledge 30 years later.

The problems associated with recalls often aren't the recall itself. In a recall, stores pull the products, and the media helps get the message to those who have already purchased the product to return it for a refund, replacement, repair or destruction. One problem crops up when companies are too slow to move. It was revealed in the press in June 2014 that GM allegedly knew of its ignition switch problems seven years before it recalled the product. The recall that began in February 2014 itself became tortuous as new models were added almost daily to the list of cars that were in danger of electrical shutdown while in motion. The press, the regulators and, of course, the lawyers pounced on GM for its alleged withholding of information for so long and for the seemingly endless additional recalls of cars affected by the problem. In 2015, regulators called meetings with GM and other auto manufacturers mired in what has become an epidemic of recalls to discuss why repairs are dragging on so long.
Denial, lack of information, hunkering down (a bunker mentality), secrecy, a silo mentality and fears about the impact on the bottom line all contribute to disastrous recalls. With all recalls, there is the cost of the recall itself, the cost of complete or partial loss (or loss of use) of certain products, repair costs in some cases (GM), regulatory scrutiny and fines, class actions and other lawsuits and the loss of potential income during any shutdown. These can all be big-ticket items, and some companies will not survive these expenses and the loss of revenue. Probably the biggest cost of any recall is the cost to reputation, which can mean the loss of existing and future customers.

In recent years, lettuce growers and a peanut warehouse did not survive recalls over contaminated products. In the case of primary agricultural producers like growers and peanut warehouses, the processors simply change suppliers, leaving the primary producers without any customers. In the retail market, the competition for shelf space is high. Recalled brands that are new or that do not have high customer value are simply barred from shelf space, effectively destroying their ability to market their products.

However, there are others that have a strong brand following and even cult-like status in local markets. Blue Bell Creameries (famous for its ice cream) is one such company, having secured an almost cult-like following in the Southern and Midwestern states. Blue Bell, founded in 1907, maintains its headquarters in the small town of Brenham, TX (pop. 16,000). Problems began when hospitals in Arizona, Kansas, Oklahoma and Texas reported patients suffering from an outbreak of listeria-related diseases, some as early as 2010. Some reports included the deaths of patients.
On May 7, the FDA (Food and Drug Administration) and CDC (Centers for Disease Control and Prevention) reported that it wasn’t until April 2015 that the new listeria outbreak was traced to a common source -- Blue Bell Chocolate Chip Country Cookie Sandwiches and Great Divide Bars manufactured in Brenham, TX -- after the South Carolina Department of Health and Environmental Control discovered the bacteria during routine product sampling at a South Carolina distribution center on Feb. 12, 2015.

Listeria is a bacterium that can cause fever and bowel-related discomfort and even more significant symptoms, especially in the young and elderly. Listeria can kill. Listeria is found naturally in both soil and water. It can grow in raw and processed foods, including dairy, meat, poultry, fish and some vegetables. It can remain on processing equipment and on restaurant kitchen equipment, and when food comes in contact with contaminated equipment the bacteria find a ready-made food source in that food and multiply. The FDA has issued guidance reports to food processors, preparers and restaurants on how to prevent listeria contamination. This guidance covers proper preparation techniques, cleaning techniques, hygiene, testing and manufacturing and processing methodologies.

Once Blue Bell understood that its cookie sandwiches and ice cream bars were implicated, the company immediately recalled the products. But soon it became evident to Blue Bell and others that the outbreak might not be limited to the ice cream bars and cookie sandwiches, and Blue Bell recalled all of its products and, to its credit, shut down all manufacturing operations. The FDA conducted inspections of Blue Bell plants and, in late April and early May, produced reports on three plants, noting issues of cleanliness and process that were conducive to listeria growth. The FDA has also reported that Blue Bell allegedly had found listeria in its plants as far back as 2010 but never reported this to the FDA.
As of this writing, Blue Bell plants are still shut down. The FDA investigation has come to a close, but many questions remain. The company has cut 1,450 jobs, or more than a third of its work force, and has said it will reenter the market only gradually, after it has proved it can produce the ice cream safely. The question is whether the things Blue Bell has done -- the quick recall, first of the problem products and then of all products, and the closure of plants to mitigate contamination issues -- are enough to save Blue Bell from further damage in the eyes of consumers and the stores that sell the product.

There are many tough questions to be answered going forward. In the intervening months, will competitors replace Blue Bell with their own products that consumers feel compare favorably? If so, when Blue Bell products are returned to stores, will consumers return, or have the stigma of listeria and the acceptance of the taste of comparable products weakened the brand? Will stores give Blue Bell adequate shelf space? And does Blue Bell have enough of a cult following and viral fan base that, once product is back in stores, customers will return as if nothing had happened? These are the scary questions that affect all food and drug companies when recalls stem from contamination in their own plants or those in their supply chain.

The American consumer seems to have become numb to the endless succession of automobile recalls from just about all manufacturers. We dutifully return our vehicles to the dealer to fix a broken or faulty this or that. Even though many recalls involve parts or processes that could cause car accidents, injuries and deaths, it is as if we have come to accept faulty auto products as the norm. This is not the case with food-borne illnesses.
The fact that a faulty car can kill as easily as a contaminated food product seems not to be an issue, as people return again and again to buy new cars from the same manufacturer that issued five recalls on their last purchased model. However, consumers will shun the food brand that made some people ill. This bifurcated approach to risk makes no sense, even in the context of protecting children from harm. The faulty car that mom drives the kids around in every day may have the same probability of injuring or killing her child as the recalled food brand. She doesn’t abandon her car, but she bans the recalled food brand from her table.

In 1990, Perrier discovered benzene in its sparkling water product. It quickly recalled all its product but then hunkered down into a bunker mentality. The lack of communication by Perrier about the problem and what it was doing exacerbated the fears of consumers, and the press speculation and outcry ran high. Perrier had always touted the purity of its water, so toxic benzene shattered this claim. Hunkering down reduced consumer confidence, and many left Perrier for suitable alternative products. Perrier has never regained the market share it had previously.

Blue Bell has taken the time to do things right, to find the causes of the problem and to take the steps necessary to prevent contamination in the future. But time also means that existing or even new competitors with comparable products will try to fill the shelf space vacated in Blue Bell’s absence. You can be sure that other-region favorites with cult followings that could never before gain a foothold in Blue Bell’s territory have been pressuring retailers to try them out as a replacement for Blue Bell. Is the Perrier loss of market share inevitable for Blue Bell even if Blue Bell communicates adequately and with transparency? Time will tell.
For now, Blue Bell not only has to fix the problems of plant cleanliness, it also needs to address emerging questions about its past operations, such as allegedly failing to report earlier listeria findings to the appropriate authorities.

While we note the good press that surrounded the 1982 Tylenol (external-tampering) recall and have seen so far a good effort by Blue Bell to resolve its own plant contamination issue, ultimately it is contamination that is the problem. Companies can become complacent, let cleanliness slide, use outmoded procedures, fail to replace older equipment or even ignore warning signs and isolated contamination events. Regional and limited-product-line companies need to be especially cognizant that, even though they have carved out a powerful niche in the marketplace, maintaining this niche is tenuous at best in the highly competitive world of food products. Consumers assume cleanliness and freedom from contamination. Food processors and manufacturers must do everything possible to keep that assumption from being contradicted.

The Sad State of Continuing Education

Should it really be possible to spend minutes on a continuing education course and get hours of credit? One that's open book? On ethics?

About 25 years ago, I attended an education committee meeting at the Southern Agents Conference in Atlanta. Continuing education (CE) had really just gotten started in some states. At this meeting, legendary insurance educator Bob Ross, of the Florida Big I, literally stood on his chair at the conference table and declared that mandatory CE would be the death of quality education. Has his prediction come true?

Four years ago, I posted the following on a LinkedIn discussion:

"A colleague related a recent experience to me last week. He went to one of the best-known online insurance CE web sites and signed up for a course titled 'Consumer Insurance.' He registered as a new user in the system, perused the course catalog, signed up for the course, skipped the course material, took the test and earned 3 hours of CE credits. All in 16 minutes.

"He was also able to save the exam and email it to me (and, of course, to anyone else taking the course). The test was loaded with vaguely worded questions and misspelled words and insurance terms (like 'vessals' and 'ordinance IN law' coverage). For some test questions, no right answer was listed, or more than one answer was correct.

"In the spirit of one-upmanship, I told him about my experience 11 years ago, when online CE was just getting started. I registered at a vendor’s web site and, like him, went straight to the test. I forget the exact total time required to register and take the 50-question test, but it was around a half hour, I think, and definitely less than an hour. The CE credit for this personal auto course? 25 HOURS. To quote the late Jack Paar, 'I kid you not.'

"Afterward, I browsed the material, and it was full of general consumer-type information taken directly from the Insurance Information Institute. The hours of CE credit granted by the state DOI were based on a word count, with complete disregard for the difficulty level.
"One thing I remember about this vendor was that it used what it called 'Split Screen Technology.' What that meant was, while you were taking the test on one side of the screen, you could view the course content that went with that test question topic on the right side and browse for the answer to the question. Browsing for the answer was easy, given that the relevant information was highlighted.

"So where are we 11 years later? Apparently in the same boat, except that online insurance education is much more pervasive than it was then. You can get two years of CE credit for as little as $39.95. A great bargain if your interest is in regulatory compliance and not actually learning something that will benefit you, your agency and the consumers and businesses you serve....

"Is there no accountability? Is there no desire to truly educate ourselves? Does anyone care? Is anyone listening?"

Flash forward to 2015…. An agent and friend of mine – good agent, CE course instructor, upstanding guy – waited until the last minute to complete his biennial CE requirement last year. So he went online, found the course he wanted, signed up, went straight to the exam and in 23 minutes had completed three hours of CE credits. As they say, the more things change, the more they stay the same. And, did I mention that the course was to comply with his state’s three-hour ETHICS requirement?

There is an online insurance forum with a discussion called, “Any Suggestions on Best Online CE Site?” It has comments such as: “I use XXXXX.com. About $35 for 21 hours of credit. Takes a few hours (maybe two) to finish and is open book.” My tongue-in-cheek response (recalling my agent friend’s experience a few months earlier) was, “I hope it wasn’t an ethics course!” The poster's response: “Huh? I guess you think each hour of CE should take an hour? Unless it’s a LIVE CE class… CE courses don’t take that long. I get unlimited CE from [provider’s name] for $39.95 per year… including a 16-hour Ethics CE course… that takes me about 15 minutes to complete. And, yes, they are open-book courses, too.”

On another discussion board, someone was touting a “Fast, Easy, and Affordable Continuing Education” website. No mention of the quality or relevance of the course material or of whether there is any actual learning involved. The site proudly proclaims a passing ratio of “over 98%.” What would regulators do if the passing ratio of their licensing exams were more than 98%? I suspect they’d insist that the exams be made a little tougher. Is any exam a legitimate test of learning if the passing ratio approaches 100%? Then why do regulators allow online CE programs that take a half-hour to earn 20 hours or more of CE credit and include exams with passing ratios near 100%?

The web site in question has 91 reviews… NONE of them mentions whether the reviewer actually learned anything. (If you're actually looking to learn, the best place to start is your own agent association, which has a vested interest in providing you with the best education possible.)

So what do you think? Am I just a grumpy old man? Should anything be done about the diploma mills that have proliferated? If so, what? If not, why not?

Debunking 'Opt-Out' Myths (Part 1)

Some myths are based on misunderstanding -- some on misinformation spread by those with a vested interest in preserving a flawed system.

Those who believe in the current workers’ compensation system share objectives with those who believe that companies should have the ability to “opt out.” We all want quality care for injured workers, better medical outcomes, fewer disputes, a fair profit for insurance companies and the lowest possible costs to employers. However, supporters of “options” to workers’ compensation object to a one-size-fits-all approach to achieving these objectives. They want to be able either to subscribe to the current workers’ comp system or to provide coverage to workers through other means.

The Texas nonsubscriber option has proven beneficial for injured workers, employers and insurance carriers for more than 20 years. The Oklahoma Option has been in effect for one year and is delivering promised results for injured workers and employers, including lower workers’ compensation costs. Legislation to provide for options in Tennessee and South Carolina was introduced earlier this year.

New laws need to be studied carefully. They take time to develop, understand and implement. Injury claims also take time to process and evaluate properly. That is part of the challenge. It takes time to develop the facts of every claim and to hear everyone’s story. The true test of whether a law or new system works is the outcomes it produces over time. Option opponents should take some time to review the results being achieved now in Texas and Oklahoma, and the fact that the Tennessee and South Carolina options are built upon the exact same principles that have led to happier employees and substantial economic development.

To cover the issues related to workers’ comp options, I am writing an eight-part, weekly series. This overview is Part 1. The remaining seven will be:

Part 2: Low-Hanging Fruit – Dispelling some of the most common myths about workers’ comp options. Sometimes, these myths stem simply from misunderstandings. Sometimes, they are outright lies in a desperate attempt to maintain the status quo for workers’ compensation programs that are championed only by a subset of interested insurance carriers, regulators and trial lawyers.

Part 3: Homework and Uninformed Hostility – Everyone complains about the inefficiencies, poor medical outcomes, cost shifting and expense of workers’ compensation systems until a viable, proven solution is presented. Then, suddenly, everyone loves workers’ comp? It’s time to take a breath and look at some homework.

Part 4: Option Impact on Workers’ Compensation Systems and Small Business – Does an option force employers to do anything? Does an option force changes to the workers’ compensation system? Are all workers’ compensation carriers opposed to options? Should past workers’ compensation reforms just be given more time to take hold? Do options hurt the state system by depopulating it of good risks? Do options increase workers’ comp premiums for small business? Is the option just for big companies, and do they all elect it?

Part 5: Litigation Uncertainties – Are Texas negligence liability claims out of control? Should Oklahoma Option litigation delay other state legislatures? Should Oklahoma Option litigation further delay employers from electing the option? Does an option create animosity between business and labor?

Part 6: Option Program Transparency and Other “Checks and Balances” – Are immediate injury reporting requirements unfair? Are option benefits simply paid at the discretion of the employer? Are option programs “secretive,” providing no “transparency”? Are there other “checks and balances”?

Part 7: Option Program Benefit Levels and Liability Exposures – Are option benefits less than workers’ compensation benefits? Are option benefits less than workers’ compensation because of taxes? Where do the savings come from?

Part 8: Impact on State and Federal Governments – Do option programs shift more cost to state and federal governments? Do option programs increase state and federal regulatory costs? Do option programs give up state sovereignty over workers’ compensation?

Bill Minick

Bill Minick is the president of PartnerSource, a consulting firm that has helped deliver better benefits and improved outcomes for tens of thousands of injured workers and billions of dollars in economic development through "options" to workers' compensation over the past 20 years.

'4-Lanes' Approach to Work Comp Claims

Work comp claims operations have become a key part of the customer value proposition, so it's crucial to analyze them the right way.

Claims operations have ascended the value chain from an “island in the stream” technical function into a key facet of the customer value proposition. To handle the growing demands, it's important to think about work comp claims in terms of four lanes.

The first lane is governed by compliance rules and requires not just compliance awareness but the knowhow to integrate compliance optimally into the operation.

The second lane is focused on vendor management. This needs to go beyond simply outsourcing non-core competencies. Successful companies concentrate on ways to leverage vendors to achieve superior outcomes and competitive advantage.

The third lane is defined by business rules. This is where automation is fully deployed and constantly improved. This lane draws from the rules-driven facets of each of the other three lanes.

The fourth lane is the “interpersonal, interpretative and professional judgment” perspective. It relies on the subjective application of knowledge and human interaction. This lane leverages engagement, training, technology and analytics to continuously accelerate accurate decision-making, enhance performance and improve quality.

The four lanes represent perspectives and should not be confused with a company’s organogram. Indeed, each lane touches every facet of any organogram found in the insurance industry today.

The compliance, vendor management, business-rules and professional judgment lanes all benefit from a strong commitment to business process improvement (BPI). Data capture and analytics that support measurement of performance along the entire claims value chain are integral to BPI. The BPI discipline uses data to identify best practices, implement those practices, assess their effectiveness and uncover opportunities for further improvement.
Embracing the four-lane view and the BPI model will help carriers make strong, data-based decisions as they reconfigure their claims departments to control costs, stabilize case reserving and improve the outcomes of their claims operations. Great tools, talented people and sound business practices are the timeless ingredients of success, as is operational adaptivity.

Today’s workers’ compensation carriers are operating in an environment of increased uncertainty and complexity. Carriers face headwinds because of a shift into a healthcare-centric business, which has caught many carriers flat-footed. Medical costs are approaching 70% of total claims spending in many jurisdictions. The utilization and cost of pharmaceuticals are rising at a rapid rate. According to the California Workers’ Compensation Institute, pharmacy and home-medical-equipment costs have risen by more than 250% since 2004. Today’s companies must adapt their models to concentrate on the effective and efficient delivery of care that improves patient outcomes, delivers customer value and underpins superior combined ratios.

The undeniable reality is that the nature of work comp claims has changed. Traditional ideas on the core competencies necessary to operate an effective claims operation need to be challenged and adjusted. Positive differentiation and sustainable market leadership depend on effectively incorporating the ingredients of success into a well-defined strategy that produces desired results and provides an agile framework for continual business evolution.


T. Hale Johnston

Hale Johnston formed Hale Strategic Consulting to help organizations navigate and thrive in an increasingly competitive environment. During his 20-plus-year career in insurance, he has led every facet of the workers’ compensation insurance value chain.

Are We Finally Getting Close to a Single View of the Customer?

A single view of the customer is now paramount. Three questions must be addressed.

The concept of the single view of the customer has been around for ages. I remember some significant single-view projects at insurers from the early 1990s. Many insurers have continued to strive toward this elusive goal. Now that the customer experience is front and center in insurers’ strategies, it’s time to revisit the single view, aka the 360-degree view of the customer. Three questions must be addressed: 1) What is single view anyway? 2) How does it relate to the customer experience? 3) What is the state of the industry? What is a single view anyway? Having a single view of the customer means that the insurer and any front-line individual dealing with the customer understand the full context of the relationship. This normally includes information such as the customer’s personally identifiable information (PII), products currently owned, relationship history and the named agent/producer (if applicable). Ideally, this is summarized for a quick snapshot of the relationship. Creating a single view is complicated by two main factors. First, the complexity and often siloed nature of IT systems make it difficult to have a common view. The evolution of channel options and technologies makes it a constant challenge to coordinate across the different systems. The second challenge relates to any company using independent agents, financial advisers, brokers or other producers. The producer does not typically share all the customer information with the insurer. Usually, the producer just passes along the minimum information needed for underwriting and servicing the customer. In addition, the producer may place insurance coverages for a customer with multiple insurance companies. The producer may have more of a single view of the customer’s insurance and financial services products and needs than the insurer has. How does single view relate to the customer experience? 
As challenging as it is, creating as complete a picture of the customer relationship as possible is essential today. Improving the customer experience is an SMA 2015 Imperative, a top strategic initiative for many companies and a key driver of business and IT strategies. Any touch point, whether human or digital, should be informed by the context of the customer relationship. An agent, customer service rep or adjuster should understand the value of the policyholder. While everyone is entitled to fair service that satisfies the contractual obligations, the level of attention and expertise applied might vary for an individual or business with a 20-year history and multiple policies vs. a new customer with just one small policy. A related question is, “Does the customer have a single view of the insurer?” Customers do not want to repeat information, experience delays in service, be presented with incorrect information or find that their favorite mode of interaction is not available or current. This is causing insurers to move toward providing an omni-channel environment, enabling policyholders to interact using whatever devices and channels they want at any time -- and making the transactions and interactions transfer across channels in real time. Ultimately, providing a world-class customer experience requires the insurer to have a single view of the customer and vice versa. What is the state of single view in insurance? How many insurers have actually achieved this single view? According to SMA research, 35% say they have assembled a single view, but only 8% can present that view in real time to the individuals interacting with a policyholder. Another 46% do not have single view today but are working on it. There are also significant differences based on the size of the insurer. Insurers with less than $1 billion in premium are more likely to have a full single view in real time than their counterparts with more than $1 billion. 
This is likely because the smaller companies' product and distribution environment is less complicated than that of the larger companies. On the other hand, virtually all insurers with more than $1 billion are at least working on achieving a single view, whereas 30% of the smaller companies have no plans. Single view and improving the customer experience are inextricably linked. Business and IT strategies and plans should always consider the implications for both of these objectives.
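The mechanics of a single view can be sketched in a few lines. In the toy Python below, every system, field and customer name is invented for illustration; the point is simply that a snapshot is assembled by joining records from siloed systems on a shared customer identifier, which is exactly where the IT complexity described above bites when identifiers don't line up.

```python
# Hypothetical records from three siloed systems, keyed by a shared customer ID.
# All system, field and customer names are invented for illustration.
policy_admin = [
    {"customer_id": "C1", "product": "Homeowners"},
    {"customer_id": "C1", "product": "Auto"},
]
crm = [{"customer_id": "C1", "name": "Pat Doe", "agent": "A-17"}]
claims = [{"customer_id": "C1", "event": "Auto claim closed 2014-11-02"}]

def single_view(customer_id):
    """Assemble one snapshot of a customer from the separate source systems."""
    view = {"customer_id": customer_id}
    for rec in crm:
        if rec["customer_id"] == customer_id:
            view["name"] = rec["name"]
            view["agent"] = rec["agent"]
    view["products"] = [r["product"] for r in policy_admin
                        if r["customer_id"] == customer_id]
    view["history"] = [r["event"] for r in claims
                       if r["customer_id"] == customer_id]
    return view
```

In practice the hard part is not this join but the plumbing around it: reconciling inconsistent identifiers across legacy systems and keeping the snapshot current in real time across channels.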

Mark Breading

Mark Breading is a partner at Strategy Meets Action, a Resource Pro company that helps insurers develop and validate their IT strategies and plans, better understand how their investments measure up in today's highly competitive environment and gain clarity on solution options and vendor selection.

Updating Your Models for Hurricane Season

The North Atlantic hurricane season has begun, and updates to CAT models can change your loss estimates and your risk management strategies.

June 1 opened the North Atlantic hurricane season, with this year marking the 10th anniversary of one of the costliest storms to make landfall in the U.S. — Hurricane Katrina. Each year, hurricane season puts catastrophe (CAT) models to the test, with potentially millions of dollars riding on their accuracy. The loss estimates calculated by CAT models can play an important role in protecting your organization from financial loss. The models have changed a lot over the past several years. For example, Hurricane Andrew in 1992 exposed the shortcomings of traditional actuarial methods that insurers use to model risks. And the billions of dollars in insured losses from Hurricane Katrina in 2005 helped lead to today’s CAT modeling rigor and its universal acceptance and use by the industry.

New Storms Change CAT Models

CAT models use algorithms to estimate potential losses stemming from a catastrophic event. Over the 10 years since Katrina, CAT modeling has become more complex because of technology improvements and the greater availability of data. After a significant storm, the models are updated based on the new data and a larger body of knowledge. These changes could considerably affect your property insurance and risk management strategies. U.S. hurricane models have changed several times in the last few years; consider the items below as you prepare for this year’s hurricane season:
  • Check your policy, including deductibles, coverage limits and sublimits, to ensure they’re adequate and realistic; check that exclusions are acceptable.
  • Ensure the quality of your CAT modeling data. Incomplete data causes more uncertainty for insurers; improving the data enables more accurate loss estimates and reduces the uncertainty for the underwriters.
  • Take a big picture view of your CAT exposures. By modeling your worldwide portfolio, you can identify regional drivers, which can help put U.S. hurricane risks in perspective. Also, using actuarial resources after a CAT or non-CAT claim can help evaluate your organization's total cost of risk (TCOR), which can better inform how you use your risk management resources.
If you have locations in CAT-prone areas, you can fine-tune their CAT loss estimates with an understanding of how they’ve changed with each model update. Aligning your risk data with CAT modeling changes can yield better outputs for insurers to underwrite your risks.
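At their simplest, the loss estimates CAT models produce come from weighting each simulated event's loss by its likelihood. The sketch below is a deliberate oversimplification: the three-row event-loss table and every number in it are invented, and real models run tens of thousands of events through hazard, vulnerability and financial modules. It only shows the final arithmetic behind an average annual loss figure.

```python
# Toy event-loss table: (annual rate of occurrence, estimated loss).
# All numbers are invented for illustration.
events = [
    (0.01, 50_000_000),  # roughly a 1-in-100-year storm
    (0.04, 10_000_000),  # roughly a 1-in-25-year storm
    (0.20, 1_000_000),   # roughly a 1-in-5-year storm
]

def average_annual_loss(event_table):
    """Expected loss per year: each event's annual rate times its loss, summed."""
    return sum(rate * loss for rate, loss in event_table)

aal = average_annual_loss(events)  # about 1.1 million per year for this table
```

When a model update revises an event's rate or the damage it implies for your construction types, this figure moves, which is why loss estimates for the same locations can shift from one model version to the next.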

Cheryl Fanelli

Cheryl Fanelli delivers best-in-class analytics for modeling catastrophe exposures for clients. As manager of Marsh's CAT Modeling Center of Excellence, Fanelli and her team work closely with consultants and brokers to review, analyze and model client property data against models of historical or potential catastrophic events.

How Workers' Comp Is Like Golf

The workers' comp process is like my golf game. Both start out well enough but then go sour, and every "fix" just makes things worse.

Last Thursday, I snuck off early for a round of golf before heading into the office. We teed off at 6:45 a.m. and were done by 9:40. A quick jaunt by the house for a shower and a change of clothes, and I was in the office by 10:30, with few the wiser regarding my absence. We've had a fairly low-humidity summer thus far, and it was a beautiful morning for golf, just a perfect day. At least it was until we teed off. I know this will be a shock, given my stunning physique, but I am not a very athletic person. I've never experienced runner's high. I get winded driving a four-minute mile. I've broken a leg playing soccer. I've broken my other leg ice skating. I once asked a kickboxing instructor if there would be doughnuts served after class. Needless to say, I am not a good golfer. My golf bag holds a chainsaw. My golf cart has four-wheel drive and big knobby tires. My Garmin watch occasionally asks me if I've stopped golfing, because it has detected that I've left the course. I use my Mulligan early, and then use a Poblaski, Heinrich, Gonzales, Ming and Schwartz -- all Mulligans of different ethnicities. The only birdie I've ever shot flew away unharmed after my ball hit it. The only time I've ever hit two good balls in a round is when I stepped on a rake (okay, you knew I had to put that one in). I asked my instructor how I could shave 10 strokes off my game. He told me to skip the 18th hole. He once looked at my scorecard and said, "Congratulations, you bowled a perfect game!" Yes, I generally suck at golf. Workers’ compensation is a lot like my golf game when you think about it. You enter with the greatest of hopes and expectations, and by the end you are just glad to be done with it – and you might be missing an arm. The game starts off okay. As it progresses, and more shots go astray, people start to help and try to “fix” my game. You’re hitting behind it. You’re hitting in front of it. You’re pushing through it. You’re topping it. You’re hitting across it. 
You’re teeing it too high. You’re teeing it too low. You’re not hitting it. You’re trying to kill it. Center the ball in your stance. Keep your head down. Bend your knees. Watch your alignment. Choke up on the club. Swing with your hips. Stand on one leg. Place your left elbow behind your ear. Pull your head out of your behind (just wanted to make sure you were still paying attention). Open your stance. Close your stance. Put the gun down, I’m only trying to help! With each “fix,” the results get progressively more convoluted, to the point where, by the 14th hole, I find myself playing through some family’s dining room while apologizing for their plate-glass window. I also comment that I love the new curtains. They are much nicer than the ones there the last time I played this house. By the 16th hole, my shots are being independently reviewed, and the ranger is telling us to move it along. By the 17th hole, I have a lawyer. By the 18th hole, after all the reforms, all the fixes, every legislative tweak, you can’t even recognize what I am doing as golf anymore. My lawyer must first clear my intended shot with the golf committee, with everyone weighing in on club, stance, approach and strategy. My stance has transformed to one that most closely resembles a crushed aluminum can, and I am facing the wrong direction. I no longer remember what the front nine looked like. By the time I’m done, I wonder why I wanted to try this in the first place. Duffers from the industry will understand this comparison. We continually reform and attempt to improve workers’ comp but see wilder and more inconsistent results in return for the effort. Perhaps it is time to return to the first tee, in a new round, so that we can remember why we are there in the first place.

Bob Wilson

Bob Wilson is a founding partner, president and CEO of WorkersCompensation.com, based in Sarasota, Fla. He has presented at seminars and conferences on a variety of topics, related to both technology within the workers' compensation industry and bettering the workers' comp system through improved employee/employer relations and claims management techniques.

Questions on Massive Government Hack

The hack makes clear that we won't solve the crisis if we just keep doing what we're doing. We have to start asking better questions.

True or false? There was no way the Office of Personnel Management could have prevented hackers from stealing the sensitive personal information of 4.1 million federal employees, past and present. If you guessed “False,” you’d be wrong. If you guessed “True,” you’d also be wrong. The correct response is: “Ask a different question.” Serious data breaches keep happening because there is no black-and-white answer to the data breach quagmire. So what should we be doing? That’s the right question, and the answer is decidedly that we should be trying something else. The parade of data breaches that expose information that should be untouchable continues because we’re not asking the right questions. It persists because the underlying conditions that make breaches not only possible, but inevitable, haven’t changed—and yet we somehow magically think that everything will be all right. And of course we keep getting compromised by a short list of usual suspects, and there’s a reason. We’re focused too much on the “who” and not asking simple questions, like, “How can we reliably put sensitive information out of harm’s way while we work on shoring up our cyber defenses?” According to the New York Times, the problems were so extreme for two systems maintained by the agency that stored the pilfered data that its inspector general recommended “temporarily shutting them down because the security flaws ‘could potentially have national security implications.’” Instead, the agency tried to patch together a solution. In a hostile environment where there are known vulnerabilities, allowing remote access to sensitive information is not only irresponsible — regardless of the reason — it’s indefensible. Yet according to the same article in the Times, the Office of Personnel Management not only allowed it, but it did so on a system that didn’t require multifactor authentication. 
(There are many kinds, but a typical setup uses a one-time security code needed for access, which is texted to an authorized user’s mobile phone.) When asked by the Times why such a system wasn’t in place at the OPM, Donna Seymour, the agency’s chief information officer, replied that adding more complex systems “in the government’s ‘antiquated environment’ was difficult and very time-consuming, and that her agency had to perform ‘triage’ to determine how to close the worst vulnerabilities.” Somehow I doubt knowing that protecting data “wasn’t easy” will make the breach easier to accept for the more than 4 million federal employees whose information is now in harm’s way (or their partners or spouses whose sensitive personal information was collected during security clearance investigations, and may have been exposed as well).

A New Approach

The game changer — at least for the short term — may be found in game theory. In an “imperfect information game,” players are unaware of the actions chosen by their opponent. They know who the players are, and their possible strategies and actions, but no more than that. When it comes to data security and the way the “game” is set up now, our opponent knows that there are holes in our defenses and that sensitive data is often unencrypted. Because we can’t resolve vulnerabilities on command, one way to change the “game” would be to remove personal information from systems that don’t require multifactor authentication. Another game changer would be to only store sensitive data in an encrypted, unusable form. According to Politico, the OPM stored Social Security numbers and other sensitive information without encryption. This fixable problem is not getting the attention it demands, in part because Congress hasn’t decided it’s a priority. The U.S. is not the only country getting hit hard in the data breach epidemic. 
The recent attack on the Japanese Pension Service compromised 1.3 million records, and Germany’s Bundestag was recently hacked (though the motivation there appeared to be espionage, according to a report in Security Affairs). According to an IBM X-Force Threat Intelligence report earlier this year, cyberattacks caused the leak of more than a billion records in 2014. The average cost for each record compromised in 2014 was $145 and has increased to $195, according to Experian. The average cost to a breached organization was $3.5 million in 2014 and is now up to $3.8 million. More than 2.3 million people have become victims of medical identity theft, with a half million last year alone. Last year, $5.8 billion was stolen from the IRS, and the Treasury Inspector General for Tax Administration predicts that number could hit $26 billion by 2017. If you look at the major hacks in recent history — a list that includes the White House, the U.S. Post Office and the nation’s second largest provider of health insurance — it would seem highly unlikely that a lax attitude is to blame. But a former senior administration adviser on cyber-issues told the New York Times about the OPM hack: “The mystery here is not how they got cleaned out by the Chinese. The mystery is what took the Chinese so long.” During this period when our defenses are no match for the hackers targeting our information, evasive measures are necessary. I agree with White House Press Secretary Josh Earnest, who said, “We need the United States Congress to come out of the Dark Ages and actually join us here in the 21st century to make sure that we have the kinds of defenses that are necessary to protect a modern computer system.” But laws take a long time, and we’re in a cyber emergency. 
The question we need to ask today is whether, in the short term, the government can afford not to put our most sensitive information behind a lock that requires two key-holders — the way nukes are deployed — or to store it offline until proper encryption protocols can be put in place.
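The one-time codes behind a second authentication factor are not exotic. The sketch below shows one standard way such codes can be generated and verified, in the spirit of the TOTP scheme published as RFC 6238; it is an illustration of the mechanism, not the system any agency actually uses, and SMS-delivered codes work differently while serving the same second-factor role.

```python
# Minimal time-based one-time password (TOTP) sketch in the spirit of RFC 6238.
# Server and user device share a secret; both derive the same short numeric
# code from the current 30-second time window, so a stolen password alone
# is not enough to log in.
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, digits=6, step=30):
    """Derive a short numeric code from a shared secret and a time window."""
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 test secret `12345678901234567890` and a timestamp of 59 seconds, this sketch reproduces the published test-vector code, which is an easy way to check any such implementation.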

Adam Levin

Adam K. Levin is a consumer advocate and a nationally recognized expert on security, privacy, identity theft, fraud, and personal finance. A former director of the New Jersey Division of Consumer Affairs, Levin is chairman and founder of IDT911 (Identity Theft 911) and chairman and co-founder of Credit.com.

A Blind Spot for Independent Agents?

An Accenture survey finds low interest in technology among independent agents, which could impede their ability to provide exceptional service.

In our survey of nearly 1,200 independent agents (IAs) in the U.S., we discovered that IAs generally don’t see technology as the answer to their needs. Indeed, they ranked digital capabilities as fifth out of 12 overall priority areas. In this Insurance Chart of the Week, we’ll examine IAs’ attitudes toward technology.

[Chart: Independent agents’ lower regard for some digital capabilities could impede future success]

Given customers’ changing expectations for how service is delivered, this disconnect—between IAs’ intent to focus on their customers and their lower regard for omni-channel capabilities—could impede IAs’ ability to continue to offer exceptional customer service. Mobile and social media capabilities, in particular, could help IAs offer the tailored, responsive experience that many customers have come to expect and demand.

Michael Costonis

Michael Costonis is Accenture’s global insurance lead. He manages the insurance practice across P&C and life, helping clients chart a course through digital disruption and capitalize on the opportunities of a rapidly changing marketplace.