Why Private Firms Should Buy D&O

In addition to the obvious reasons, D&O can reduce the stress of a lawsuit and prevent a management team from splitting apart.

It is a fact of doing business in the U.S.: Lawsuits happen! Regardless of whether the action has any merit, lawsuits are expensive to deal with, damaging to reputations and draining to a business and its management. Small to mid-sized private companies can especially attest: litigation is never a small or inconsequential matter. Any business, regardless of its sector (manufacturing, service, agriculture, transportation, energy, technology), can find itself embroiled in a dispute. Disputes can arise from relationships gone sour with shareholders, competitors, regulators, creditors or even a random third party.

Directors, officers and company ("D&O") liability insurance for privately held companies can be a lifesaver when an unexpected lawsuit or dispute arises. When a business and its management team are placed in an adversary's crosshairs, a D&O policy can step in to respond right off the bat. That response includes providing a defense, including the engagement of skilled legal counsel who will guide the directors and officers through the process. In addition, when coverage applies, the D&O policy will fund the settlement of a lawsuit, or pay a judgment if the case goes to trial.

Originally, D&O coverage was designed to protect only the individual directors and officers from lawsuits brought by outside shareholders not involved in the management of the company. However, D&O products have evolved considerably over the past 20 years and now cover the entity as well as the directors and officers for a wide range of management decisions and claims from shareholders, clients, competitors, vendors and creditors. A disturbing fact for members of a company's board is that directors and officers can, and usually do, get personally named in a lawsuit asserted against the company, with the claim seeking personal liability against them.
The more closely held a company is, the fewer owners and officers there are to sue, so the exposure to the personal assets of those principals is even more pronounced. Directors and officers know that, in most states, a corporation is required to indemnify them for personal liability arising from the execution of their corporate duties. If the corporation is on financially sound footing, their personal assets will usually be protected. However, situations often arise where the company cannot or will not defend a director or officer, compelling that person to mount a defense alone. Such cases arise when the company is not on solid financial footing or when it becomes insolvent. As troubling as it may sound, in tough financial times, directors and officers could find themselves paying for the defense and settlement of a lawsuit out of their own pockets.

When a lawsuit hits, the financial advantages of having D&O coverage are readily apparent. What isn't evident from reviewing policies is something we've witnessed over the course of many D&O claims: When serious accusations of wrongdoing are leveled at a member of management and there's no D&O coverage to fall back on to fund the claim, the financial burden of a dispute can tear a management team apart. Suppose you are the officer targeted by the allegations. How quickly will your colleagues rally around you when your alleged error or omission is the cause of significant financial hardship to the company? Without D&O insurance in place to shoulder the financial and legal burden of a claim, infighting can erupt quickly once the company's financial resources are placed in peril. When accusations fly, and salaries and bonuses might be affected, people often change the way they behave toward one another. Rather than circling the wagons, executives may play the blame game. In contrast, if D&O insurance is in place, there may be no such panic, and the finger-pointing may be neither as fierce nor as consequential.
Accordingly, we believe one of the great hidden benefits of D&O insurance is that it tends to defuse internal turmoil and helps maintain management cohesiveness during what is surely a trying time. When D&O insurance is in place and coverage has been accepted, the management team can more easily maintain a "stick together" attitude and an "us against them" mentality.

To summarize: We believe D&O insurance is imperative for private companies and their principals to carry. D&O coverage acts as a solid backstop to mitigate or absorb what could be the devastating financial impact of unforeseen business litigation, which can arise at any time from within or outside any organization. In a society as litigious as ours, going without D&O insurance creates a serious exposure for the business itself, as well as for every member of a company's management team personally. Make sure your private company customers, no matter their size or industry, carefully consider the purchase of D&O insurance to ensure that both the company and their personal assets are protected.

Laura Zaroski

Laura Zaroski is the vice president of management and employment practices liability at Socius Insurance Services. As an attorney with expertise in employment practices liability insurance, in addition to her role as a producer, Zaroski acts as a resource with respect to Socius' employment practices liability book of business.

Your Competitors Aren't Who You Think

Silicon Valley companies may win because they focus on customers rather than the products they have to sell, and on pulling customers in rather than pushing products out.

Historically, the insurance industry has assessed its competition by looking at companies within the industry that compete directly: those that sell in the same market segments, offer similar products, use the same distribution channels or are of similar size. But in today's fast-changing digital economy, this approach has become outdated, and insurers are being blindsided by new challengers and competitors from outside the industry. These challengers may not compete directly by offering and underwriting insurance, but they are competing in new ways to capture the customer relationship, the customer's pocketbook and much more through innovative offerings and business models.

Where are they coming from? This new breed of competitors is emerging from technology and Silicon Valley companies. Numerous articles and blogs have discussed the potential of companies like Google, Apple, Amazon, eBay and other technology companies entering the insurance space, fueling speculation and, for some, even fear. Centuries-old industries and the companies within them are feeling the pressure to reimagine themselves, and insurance is one of them.

Most of these new challengers have emerged in the last 15 years, many in the last five to 10. But the impact of these technology and Silicon Valley companies is only beginning. Why? Because of the massive numbers of engaged, loyal users they count as customers; insurance companies pale in comparison. That contrast illustrates why these companies pose a competitive threat to insurance. The growth and influence of these industry challengers has accelerated greatly in the last three years, fueled by their drive and commitment to emerging technologies and innovation.
The insurance industry is one of tradition, built on decades, even centuries, of business assumptions and models that have changed little within a culture of risk aversion. Yet insurance now operates in a world that is changing rapidly, creating new businesses and customer engagement models and embracing new and emerging technologies. The industry, like so many others, is feeling the tremors of a coming quake of seismic change that will redefine competitive boundaries and customer loyalty.

We have identified and discussed key business attributes that detail the differences between these technology companies and insurers. The contrasts could not be more stark. The strategic vision of these organizations reflects the shifts and challenges of the new digital economy. The technology companies' focus is:
  • Customers rather than products
  • External rather than internal
  • Customer power rather than company control
  • Connecting people to an ecosystem rather than connecting people to the company
  • Pulling and engaging rather than pushing and informing
  • Selling an outcome rather than selling a product
  • Creating an experience rather than processing a transaction
Insurance is just beginning to experience the implications. As in other industries, this overwhelming change requires insurers to go back to fundamentals and ask: Who are we? What do we do? What do we offer, and how do we offer it in this new digital era? Will we be a product manufacturer, an underwriter of products, a distributor of products, a provider of services or all of the above? And how will we leverage new and emerging technologies and redefine the customer experience?

Disruption, convergence, change, technology adoption and the digital world are unfolding more rapidly than anyone realized, and many are unprepared. To be even viable, let alone successful, in the new digital, customer-driven world, insurers must first leverage their deep expertise by providing risk transfer products and risk management services that meet customers' needs. Second, insurers must decide whether they will manage an ecosystem of partners to provide services that repair, reimburse and restore after loss events. Third, and most transformative, insurers must decide whether they will broaden their offerings beyond insurance to own and manage the customer relationship.

Today, many insurers are accustomed to asking, "What products and services do we manufacture, and what channels do we use to sell them to grow the company profitably?" The insurance leaders of tomorrow will be asking, "How can we connect with, educate and enhance our customers' lives or businesses through innovative offerings that provide meaningful value in helping to manage risk in a changing world?"

For the full research brief, see "The Shifting Competitive Landscape: A New Breed of Industry Challengers."

Denise Garth

Denise Garth is senior vice president, strategic marketing, responsible for leading marketing, industry relations and innovation in support of Majesco's client-centric strategy.

The Most Valuable Document That Money Can Buy

Without a Reserve Funding Analysis, the insurer becomes the de facto reserve fund.

The Reserve Funding Analysis is one of the most important documents that an insurance provider can have for a property it covers. A Reserve Funding Analysis is a formal evaluation of the physical condition of a property and the expected future expenses needed to keep the property in a viable state of repair. Having this funding statement out in the open on Day One discloses to the owner, the insurance company, the financial institution and all other stakeholders the real cost of preserving the asset in which everyone is vested.

For the insurance carrier, the analysis establishes the baseline property condition, helps ensure responsible operation of the property and provides the actual numbers that support appropriate pooling of risks and accurate pricing of the insurance product. This is not trivial. Where there is an absence of knowledge and inadequate reserve funding, there is little incentive for the operators to mitigate major system exposures. Many times, market conditions such as "curb appeal," trend-setting landscaping, new exterior paint or other highly visible amenities receive disproportionately higher funding priority than hidden major building system perils such as a potable water system rupture, building envelope failure, or venting or water drainage intrusion. This situation represents a severe moral hazard: The insurer becomes the de facto major system reserve fund. This is especially the case when the owner can claim no knowledge of imminent failure or will attribute the failure to one or more lesser contributing factors.

While this is not quite as scandalous as TV crime dramas where the crook sets a building ablaze to collect the insurance money, owners are still making insurers unfairly responsible for more subtle perils such as catastrophic water system failures and building envelope failures involving roofs, water rot and foundation decay. These conditions are often difficult to see until there is a major failure.
Or, they may be exacerbated, but not caused, by a storm, earthquake or other natural peril. Piping systems corrode from the inside, becoming weaker while showing little or no indication of a problem until water is cascading down 12 floors of luxury condominiums. Building envelope water intrusions can go unnoticed until toxic mold appears in the venting or a deck collapses. Certain concrete cracks can allow water to enter invisible places, undermining the purpose of the foundation. Many of these perils are easily avoided with routine maintenance, assuming the owner is aware of them and has budgeted for them.

The most common reserve study on the market is the one prepared for condominiums. Its purpose is to protect mortgage holders and shared-asset community members and to help set homeowners association (HOA) dues fairly and comparably (vs. alternative properties) while avoiding the need to impose special assessments on the owners. Think about it: These are exactly the same business functions the insurance carrier performs. Therefore, a specialized reserve study that meets the needs of the insurance carrier should be considered essential for any significant building, facility or property.

The Building Condition Assessment

The first step is a comprehensive Property Condition Assessment performed by a registered professional engineer knowledgeable in the ASTM E2018-08 standard. No law requires a professional engineer or architect to perform the inspection, but most attorneys highly recommend one in the event the findings are legally challenged. The ASTM E2018-08 standard will provide visibility into major systems and components that are due to be replaced, that are failing or that are operationally substandard or outdated. The Condition Assessment can also reveal how a building is "aging." For example, a 10-year-old building can reveal what the next 40 years will be like much better than a new building can.
A quarter inch of settling after 10 years can be re-measured after 20 years. Corrosion or rot on the north side, but not on the south side, can tell an important story for the future. A good engineer can see these trends and predict future conditions with surprising clarity.

Hiring a Consultant

It is imperative that a licensed professional civil or mechanical engineer perform the assessment and reserves estimate. Many engineers closely align themselves with architects and other engineers, so it is important to inquire about their professional network. There are many certified inspection professionals who are qualified to perform inspections, but the science of engineering can quickly become an integral part of the process. Water chemistry, corrosion science, water vapor diffusion and hydrocarbon compatibility, electronic logic controllers, etc., are the domain of engineering. Further, only an engineer is qualified to engage other engineers where needed -- the lineage should remain intact wherever possible, because only engineers can produce the numbers that fit the actuarial tables and can be upheld in court.

Initial Project Review

Once the responsible engineer has been retained, he or she will require a set of initial information about the building, property or facility. This will include:
  • As-built drawings and architectural specifications
  • The declaration and description
  • Reciprocal cost-sharing agreements
  • Previous reserve fund studies
  • The most recent audited financial statements
  • The current annual contribution to the reserve fund
  • The current maintenance and repair records
  • A summary of (end-user) problems and concerns
The Process
  • The engineer is provided the above information. The as-built drawings and specifications are reviewed prior to visiting the site so the engineer can become familiar with the overall design and construction schemes. If they are absent or deficient, this is a red flag.
  • Site inspection is performed.  Problem areas are reviewed and documented.
  • The report is prepared. The drawings are used to “take-off” quantities such as roofing, exterior wall cladding, asphalt, hallway finishes, etc. that will assist in preparing the replacement/repair cost budgets.
  • The engineer presents a draft report to the insurer prior to its being finalized.
  • Upon receiving direction from the investors, the Reserve Fund Study is finalized and submitted.
The Report Format

Every engineer may have a slightly different format, but, in general, the Reserve Funding Analysis has two main components: physical analysis and financial analysis. The analysis includes:
  • Inspection Report. Based on the results of the site inspection, the report will provide an itemized overview of the major common elements. This will include general condition, the need and timing for remedial work or replacement and any other information that the stakeholders should be aware of.
  • Information Tables. There is typically a table that summarizes the common elements in terms of current age, life expectancy, remaining service life and current and future cost budgets.
  • Expenditure Tables. The data from the information tables is summarized to show when the itemized common element repair/replacements are estimated to take place. For each year, these expenditures are summed. The annual projections must be a minimum of 30 years commencing in the year the study (and updates) is prepared.
  • Cash Flow Tables. Based on the estimated expenditures, different contribution plans can be provided. Often, one plan includes the contribution level currently being used as a form of comparison with other scenarios.
The Funding Plan

As part of the financial analysis, the study must have a recommended funding plan projected over 30 years from the date of the study. The plan must show:
  • The estimated cost of major repairs and replacements based on current costs.
  • The same costs adjusted to account for an assumed inflation rate. The inflation rate must be stated in the study.
  • The opening balance of the reserve fund.
  • The recommended contributions to the reserve fund, determined on a cash flow basis, that are required to adequately offset the expected cost of each major repair or replacement of common elements and assets in the year it is expected.
  • An estimate of the interest earned on reserve fund contributions, based on an assumed interest rate that the study should state.
  • The percentage increase in annual contributions to the reserve fund for each year of the 30-year study.
  • The estimated closing balance of the reserve fund for each year.
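Mechanically, the funding plan above is a year-by-year cash-flow projection: contributions and interest flow in, inflation-adjusted expenditures flow out. The sketch below illustrates that arithmetic; every input (the balances, rates and the single roof replacement) is a hypothetical figure chosen for illustration, not data from any actual study.

```python
# Hypothetical 30-year reserve fund cash-flow projection.
# Every input below is an illustrative assumption, not data from any real study.

def project_reserve_fund(opening_balance, annual_contribution,
                         contribution_increase, interest_rate,
                         inflation_rate, expenditures_today, years=30):
    """Return year-by-year closing balances for the reserve fund.

    expenditures_today maps a year index (0-based) to the estimated
    repair/replacement cost expressed in today's dollars; the projection
    inflates each cost to the year it is expected to occur.
    """
    balance = opening_balance
    contribution = annual_contribution
    closing_balances = []
    for year in range(years):
        # Inflate today's cost estimate to the year of the expenditure.
        spend = expenditures_today.get(year, 0.0) * (1 + inflation_rate) ** year
        interest = balance * interest_rate           # interest earned on the fund
        balance = balance + contribution + interest - spend
        closing_balances.append(round(balance, 2))
        contribution *= 1 + contribution_increase    # planned annual increase
    return closing_balances

# Example: $50,000 opening balance, $20,000 initial contribution rising 3%/year,
# 2% interest, 2.5% inflation, and a $150,000 roof replacement due in year 10.
balances = project_reserve_fund(50_000, 20_000, 0.03, 0.02, 0.025, {10: 150_000})
print(balances[9], balances[10])  # closing balances before and after the roof job
```

Running alternative contribution schedules through a projection like this is what the cash flow tables compare: the drop in year 10 shows whether the recommended contributions keep the fund solvent through the major expenditure.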
Conclusion

It may be surprising that much of this due diligence is already being performed by the owners, management firm, bank, attorneys, accountants, real estate brokers, etc. However, it is the insurance carrier that must have the clearest probabilistic view of the property. The insurance carrier must be assured that due diligence has been performed by the right professionals, at the right time, and articulated in the right numerical form that serves the calculations of the insurer, not just those of the insured.

Dan Robles

Daniel R. Robles, PE, MBA is the founder of The Ingenesist Project (TIP), whose objective is to research, develop and publish applications of blockchain technology related to the financial services and infrastructure engineering industries.

'Montana Model' for Workers' Comp Fees

As state regulators try to limit growth in hospital costs, they should look at the delicate balance that Montana is trying to strike.

Policymakers in many states increasingly enact medical fee schedules in the quest to limit the growth of hospital costs. They often seek a reference point or benchmark to which they can tie reimbursement rates. Usually, that benchmark is either Medicare rates in the state or some measure of historic charges by the hospitals. Medicare rates are usually seen by healthcare providers as unreasonably low; charge-based fee schedules are often seen by payers as unnecessarily high.

This study examines an alternative benchmark for workers’ compensation fee schedules—prices paid by group health insurers. In concept, this benchmark has certain advantages. Unlike Medicare, the group health rates are not the result of political decisions driven by the exigencies of the federal budget. Rather, these rates are the result of negotiations between the payers and the providers. Unlike a charge-based benchmark, group health rates are what is actually paid to providers. This is important given the growing public attention to the arbitrariness of many hospital charges.

The major limitation of using group health prices paid as a benchmark for workers’ compensation fee schedules is that these prices are seen by group health insurers as proprietary. However, one state, Montana, has adopted a fee schedule based on group health prices paid and implemented relatively straightforward processes to balance the need for a fee schedule and the need to protect the proprietary information of the group health insurers.

This article does the following: (1) describes the major findings of the study, (2) suggests a framework for thinking about whether prices paid by workers’ compensation payers are too high or too low, and (3) discusses the Montana approach.

Major Findings

What do we find when we compare the prices paid to hospital outpatient departments by group health and workers’ compensation payers? Among the major findings of this study are:

  • In many study states, workers’ compensation hospital outpatient payments for common surgical episodes were higher, and often much higher, than those paid by group health. For example, in half of the study states, workers’ compensation paid at least $2,000 (43%) more for a common shoulder surgery (see Figures 1a and 1b).
  • The amount by which workers’ compensation payments exceeded group health payments (“the workers’ compensation premium”) was highest in the study states with either no fee schedule or a charge-based fee schedule (Tables 1a and 1b).
Are Prices Paid by Workers' Compensation Payers Too Low or Too High?

The comparison of workers’ compensation and group health hospital outpatient payments raises the question in many states as to whether workers’ compensation hospital outpatient rates are higher than necessary to ensure injured workers access to good quality care. For example, in Indiana, hospital outpatient services associated with shoulder surgery were, on average, reimbursed $9,183 by workers’ compensation as compared with $7,302 by group health. Is this differential of $1,881 necessary to induce hospital outpatient departments to provide facilities, supplies and staff to treat injured workers in an appropriate and timely manner?
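The Indiana differential can be computed directly from the figures quoted above; a trivial sketch (the dollar amounts are the study averages cited in the text, and the percentage is derived from them):

```python
# The "workers' compensation premium" for the Indiana shoulder-surgery example.
# Dollar figures are the study averages quoted in the text.
wc_paid = 9_183   # average workers' compensation reimbursement ($)
gh_paid = 7_302   # average group health reimbursement ($)

differential = wc_paid - gh_paid
pct_premium = differential / gh_paid * 100

print(f"Workers' comp pays ${differential:,} more "
      f"({pct_premium:.0f}% above group health)")
```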

Consider the following framework for analyzing the question. If hospital outpatient departments were willing to provide timely and good-quality care to group health patients at the prices paid by group health insurers, then two questions should be answered by policymakers:

  • What is the rationale for requiring workers’ compensation payers to pay more to hospital outpatient departments than group health insurers pay for the same treatments?
  • If there is such rationale for higher payment, is a large price differential necessary to get hospital outpatient departments to treat injured workers?

In addressing the first question, let’s say that the hospital outpatient department provided identical treatment for a group health patient and a workers’ compensation patient. If the care was identical—same facilities, supplies and staff—and workers’ compensation imposed no unique added costs on the hospital outpatient department, then there is little rationale for workers’ compensation payers to pay more than the group health payers.

Healthcare providers often cite a special “hassle factor” in workers’ compensation that does not exist in treating or billing for the group health patient. Common examples of the alleged hassle factor include longer payment delays, higher nonpayment rates (where the compensability was contested or where care given was not deemed appropriate), more paperwork, more missed appointments, lower patient compliance with provider instructions and so on. If these hassles are unique to workers’ compensation patients, then this forms a potential rationale for workers’ compensation paying higher prices than group health, for the same care. Let’s assume that this accurately describes the real world.

Then the question becomes: Are the unique costs imposed on hospital outpatient departments large enough to justify workers’ compensation payers having to pay $2,000-$4,000 more per surgical episode than group health payers pay for the same care? If the costs of these hassles total less than, say, $2,000, then workers’ compensation fee schedules could be lowered without adverse effects on access to care for injured workers. In other words, the large price differentials observed in this study can only be justified by the large costs of these hassles that are unique to workers’ compensation.

In applying this framework to different types of providers, some of these hassles will loom larger for some kinds of providers than for others. For example, the first doctor to treat a patient may be more exposed to nonpayment risk than providers who treat later in the claim; the hospital outpatient department's use of operating and recovery rooms is less affected by paperwork but exposed to payment delays. Because the majority of payments to hospital outpatient departments are for physical facilities (e.g., the recovery room), equipment (e.g., the MRI machine, but not the radiologist's professional services) and supplies (e.g., crutches), hospital outpatient departments are likely more exposed to billing delays, nonpayment risk (at emergency rooms for initial care) and canceled appointments, and less exposed to time-consuming paperwork hassles or patient compliance issues.

Moreover, if the additional burden that the workers’ compensation system places on hospital providers (e.g., additional paperwork, delays and uncertainty in reimbursements, formal adjudication and special focus on timely return to work) is sizable, policymakers have two choices. The first is to adopt a higher-than-typical fee schedule that embraces large costs for the hassle factor. The alternative is to identify and remediate the causes of the larger-than-typical hassles -- especially where these are rooted in statutory or regulatory requirements.

The Montana Approach

The major limitation of group health as a benchmark for workers’ compensation is that the group health rates are the proprietary competitive information of commercial insurers. The Montana legislature found a way to use group health prices as a benchmark for its workers’ compensation fee schedule while respecting the confidentiality of the commercial insurers’ price information. The approach used is to obtain the price information (conversion factor) from each of the five largest commercial insurers and group health third-party administrators (TPAs) in the state and compute an average. The average masks the prices paid by any individual commercial insurer or TPA. In addition, the statute guarantees the confidentiality of the individual insurers’ information.
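The masking step in the Montana approach reduces to a simple average over the reported conversion factors. A minimal sketch, with invented dollar figures (the actual reported rates are confidential):

```python
# Montana-style benchmark: average the conversion factors reported by the five
# largest group health insurers/TPAs so that no single insurer's proprietary
# rate is disclosed. The dollar figures below are invented for illustration.
reported_conversion_factors = [62.0, 58.5, 71.0, 66.5, 60.0]

benchmark = sum(reported_conversion_factors) / len(reported_conversion_factors)
print(f"Published fee-schedule benchmark: ${benchmark:.2f}")
```

Because only the average is published, no individual insurer's negotiated rate can be recovered from the fee schedule, which is what makes the insurers willing to report their figures.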

Conclusion

This study raises a number of concerns about whether fee schedules are too high or too low. There are two key pieces of information needed to address this -- (1) how much other payers in the state are paying, and (2) whether there is a unique workers’ compensation hassle factor.

This study addresses the first question for common surgeries done at hospital outpatient departments. A related WCRI study does the same for professional fees paid to surgeons and primary-care physicians.

Quantifying the presence and magnitude of any unique workers’ compensation hassle factor remains to be done. However, in some states, these studies show that workers’ compensation prices were below those paid by group health. For those states, policymakers may want to inquire about access-to-care concerns, especially for primary care. For other states, the workers’ compensation prices paid were so much higher than prices paid by group health insurers that policymakers should ask if the large differences are really necessary to ensure quality care to injured workers.

One way of framing that question using the results of the WCRI studies is as follows: “Workers’ compensation pays $10,000 to hospital outpatient departments for a shoulder surgery on an injured worker, and group health pays $6,000 for the same services. Does it make sense that if workers’ compensation paid $9,000 that hospital outpatient departments would no longer treat injured workers—preferring to treat group health patients at $6,000, or Medicare patients at a fraction of the group health price, or Medicaid patients at prices lower than Medicare?”

Ms. Tanabe is sharing this article on behalf of its authors, Richard Victor and Olesya Fomenko.

Ramona Tanabe

Ramona Tanabe is executive vice president and counsel at the Workers Compensation Research Institute in Cambridge, MA. Tanabe oversees the data collection and analysis efforts for numerous research projects, including the CompScope Multistate Benchmarks.

MRIs: Part of the Solution, or Problem?

The extent to which early MRIs contribute to the perception of disability has yet to be fully quantified but appears to be significant.

Another study sponsored by Liberty Mutual concludes that early magnetic resonance imaging for diagnosis of back pain leads to higher costs and poorer outcomes. The study, published in the August issue of the medical journal Spine, showed that when back pain patients received MRI scans within the first month after injury, they were 18 to 55 times as likely as the reference group to receive additional diagnostic and invasive procedures. Glenn Pransky, a co-author of the study and director of the Liberty Mutual Center for Disability Research, said that MRIs can put patients in the mindset of trying to find a specific problem in their back and then seeking to fix it. "People get hung up on thinking, 'Oh, I've got this ruptured disc. That must be the problem. I won't be well until somebody fixes that ruptured disc,'" Pransky said.

As many of us know, herniated discs and other spinal "abnormalities" are actually quite common. Pain is complex, and the cause of pain is often elusive. In an Aug. 20 webinar from managed care company Paradigm Outcomes, two physicians pointed out that pain can come from many places. "When you look at somebody's pain, they have the pain sensation -- there could be nerve pain, there could be soft tissue-muscle-tendon pain," said Steven Moskowitz, senior medical director of Paradigm's pain program. "They could have pain because they're deconditioned and out of shape and stiff, and so it hurts to be stiff and to move when you're stiff. And then they can have. . . emotional components."
In his most recent book, Living Abled and Healthy, Christopher Brigham, MD, no stranger to workers' compensation and lead editor of the AMA 6th Ed. Guide for Rating Permanent Disability, examines people who have had catastrophic injuries or who grew up "less than able" but overcame these difficulties, and compares them with folks who can't seem to surmount such obstacles. [Disclosure -- Brigham is a friend, and I contributed a small part to the book.] Brigham argues that our mind-body connections are surprisingly strong and that people in general discount the effect our emotions, psychology, feelings and perceptions have on our physical being. "If we believe something is helping us we will likely feel better," Brigham says. "If we believe something is hurting us, we will likely feel worse. Our attitudes define who we are, and the choices we make determine our destinies." Robert Aurbach, an attorney, researcher and international workers' comp expert now consulting in Australia, has noted that neuroplasticity -- the brain's ability to reorganize itself by forming new neural connections -- can play a big role in one's perception of ability versus disability. Essentially, continued "training" to be disabled, rather than abled, forms neural connections that reinforce negative associations with pain. The extent to which early MRIs contribute to the perception and emotion of disability has yet to be fully quantified, but the Liberty Mutual study suggests the connection is not insignificant. According to a 2013 report from the Bureau of Labor Statistics, sprains, strains and tears made up 38% of work-related injuries in 2012, making those the most common source of claims. In that category, the back was the most-often injured body part, making up 36% of sprains, strains and tears. Taken together, that means roughly one in seven work injury claims (36% of 38%, or about 14%) is related to back pain.
How many of those end up worse because of diagnosis and treatment fostered by early MRI findings and might have otherwise been adequately (and perhaps more effectively and efficiently) treated conservatively isn't known, but I suspect the number is considerable. The authors of the Liberty Mutual study found that MRI use for patients with lower back pain wasn’t distributed evenly across the U.S., and they hope to continue the study to determine whether certain states are more prone to improper use of the scans. I think it would also be interesting and beneficial to correlate that study with information about disability rates; my guess is that we (the grand collective "we") make people more disabled than they otherwise would be in our zeal to use medical technology and attempt to find easy answers to complex problems, like pain and disability.
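As a quick sanity check, the two BLS percentages cited above can be combined directly. This sketch uses only the figures quoted in the article:

```python
# BLS figures cited above (2012 data): sprains/strains/tears were 38% of
# work-related injuries, and the back accounted for 36% of those.
sprains_share = 0.38   # share of all work injuries that are sprains/strains/tears
back_share = 0.36      # share of those that involve the back

back_pain_share = sprains_share * back_share
print(f"{back_pain_share:.1%} of work injury claims involve back pain")
# Roughly one claim in seven.
```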

David DePaolo


David DePaolo is the president and CEO of WorkCompCentral, a workers' compensation industry specialty news and education publication he founded in 1999. DePaolo directs a staff of 30 in the daily publication of industry news across the country, including politics, legal decisions, medical information and business news.

Translating Business Logic Into Code

The traditional process needs to be reversed: The business side can't simply hand off requirements to programmers any more.

Imagine a science-fiction novel involving a computer that becomes independent of humans. It runs their affairs and takes care of their lives, but few, if any, know its inner workings.  But that isn't science fiction. There are lots of business systems that no one fully understands, designed by people who no longer maintain them, encompassing business logic that was dictated by people who are long gone. In both scenarios, we are at the mercy of a computer. In the first one, Arnold Schwarzenegger saves us. In the second scenario, a humanoid robot won't do. Enterprise software development comprises understanding, documenting and implementing business logic, which is the human process the software is supposed to automate. And yet, one of the often-neglected links in the chain of skills that software developers possess is the business side. Understanding of the business is necessary to keep the underlying software connected and to stop it from becoming an isolated, unreachable island over time. This skill does not come naturally or derive inevitably from a technical background. Despite the name, business logic is rarely logical. "There are few things that are less logical than business logic," software guru Martin Fowler writes. Business logic lacks the determinism of functions and conditional paths and the clear-cut rules of Boolean logic. Business logic is the creation of sales and marketing, not mathematicians and software engineers. A "hybrid-professional" is needed: someone who knows the business and the technology. The benefit of this approach may not be obvious. After all, division of labor exists for a reason, and specialization is due to the limited capacity of individuals and time available to them. Without specialization, major human endeavors wouldn't have been possible. There are, however, certain scenarios in which strict specialization is more of a barrier to progress than a facilitator. Enterprise software is an example. On the surface, the delineation seems natural.
The business people know what the business logic is, and they deliver it to the developers in the form of requirements. The developers then translate the requirements to software. The weakness of this approach is the direction of the translation. Distilling the business process to a set of requirements by a non-developer necessarily deprives the developers of the big picture, so they will write software based on the pinhole view given to them. It's like translating text from a foreign language. The message, more or less, can be conveyed if you translate word by word, but to fully appreciate the original content you have to understand the original language in its cultural context. So developers need to understand business language and directly engage the business side. This approach seems to be merely a reversal of direction. Instead of having the business side deliver the requirements to the technical side, we would be doing the opposite. How is that better? It is better because the end product lies at the technical end, not the business end. We're trying to build software to accommodate the business, not the opposite. That dictates the direction, and it makes all the difference. Business people excel at business, and they do it without worrying about how their decisions will translate to software. It is best to free them from that worry -- unless the business itself is software! Software developers have to worry about the software they create. Because one of the two groups has to be burdened with both sides of the equation, the latter is the natural candidate. So the ideal hybrid-professional is one with solid roots on the technology side and the skill to venture into the business side and obtain insight into the specific business domain she is developing for. 
That skill is made up of people skills that complement the "machine skills"; the ability to compartmentalize technology so it doesn't pollute or dictate the business model; and having true insight, not just knowledge, into the business domain, so that there is never anything lost in translation.
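The kind of exception-riddled business logic Fowler describes might look like this entirely invented renewal-discount rule; none of these names or figures come from a real system:

```python
# Hypothetical example: a discount rule as a business stakeholder might
# actually state it, complete with exceptions and grandfathered promises.
from datetime import date

def renewal_discount(policy: dict) -> float:
    """Return the discount rate for a renewing policy."""
    # "Loyal customers get 10%..."
    discount = 0.10 if policy["years_with_us"] >= 5 else 0.0
    # "...except in states where we're re-filing rates this year..."
    if policy["state"] in {"CA", "NY"}:
        discount = min(discount, 0.05)
    # "...but anyone sold by the old Smith agency keeps the legacy 12%,
    # because that's what we promised when we bought their book in 2009."
    if policy["agency"] == "SMITH" and policy["inception"] < date(2010, 1, 1):
        discount = 0.12
    return discount
```

No Boolean algebra predicts the 12% branch; it exists because of a 2009 acquisition. Developers who only receive "requirements" see the branches but not the history, which is exactly the pinhole-view problem described above.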

Kal Nasser


Kal Nasser is a software developer, until recently with X by 2, a technology consulting firm in Farmington Hills, Mich., that specializes in IT transformation projects for the insurance industry. Its hands-on experts provide planning, architecture, leadership, turnaround and implementation services.

5 Practical Steps to Get to Self-Service

Many insurance carriers are finding they have to overcome decades of information neglect.

To participate in the new world of customer self-service and straight-through processing, many insurance carriers find themselves having to deal with decades of information neglect. As insurers take on the arduous task of moving from a legacy to a modernized information architecture and platform, they face many challenges. I'll outline some of the common themes and challenges, possible categories of solutions and practical steps that can be taken to move forward. Let's consider the case of Prototypical Insurance Company (PICO), a mid-market, multiline property/casualty and life insurance carrier, with regional operations. PICO takes in $700 million in direct written premiums from 600,000 active policies and contracts. PICO's customers want to go online to answer basic questions, such as "what's my deductible?"; "when is my payment due?"; "when is my policy up for renewal?"; and "what’s the status of my claim?” They also want to be able to request policy changes, view and pay their bills online and report claims. After hearing much clamoring, PICO embarks on an initiative to offer these basic self-service capabilities. As a first step, PICO reviews its systems landscape. The results are not encouraging. PICO finds four key challenges. 1. Customer data is fragmented across multiple source systems. Historically, PICO has been using several policy-centric systems, each catering to a particular line of business or family of products. There are separate policy administration systems for auto, home and life. Each system holds its own notion of the policyholder. This makes developing a unified customer-centric view extremely difficult. The situation is further complicated because the level and amount of detail captured in each system is incongruent. For example, the auto policy system has lots of details about vehicles and some details about drivers, while the home system has very little information about the people but a lot of details about the home. 
Thus, choices for key fields that can be used to match people in one system with another are very limited. 2. Data formats across systems are inconsistent. PICO has been operating with systems from multiple vendors. Each vendor has chosen to implement a custom data representation, some of which are proprietary. To respond to evolving business needs, PICO has had to customize its systems over the years. This has led to a dilution of the meaning and usage of data fields: The same field represents different data, depending on the context. 3. Data is lacking in quality. PICO has business units that are organized by line of business. Each unit holds expertise in a specific product line and operates fairly autonomously. This has resulted in different practices when it comes to data entry. The data models from decades-old systems weren’t designed to handle today's business needs. To get around that, PICO has used creative solutions. While this creativity has brought several points of flexibility in dealing with an evolving business landscape, it's at the cost of increased data entropy. 4. Systems are only available in defined windows during the day, not 24/7. Many of PICO's core systems are batch-oriented. This means that updates made throughout the day are not available in the system until after-hours batch processing has completed. Furthermore, while the after-hours batch processing is taking place, the systems are available neither for querying nor for accepting transactions. Another aspect affecting availability is the closed nature of the systems. Consider the life policy administration system. While it can calculate cash values, loan amounts, accrued interest and other time-sensitive quantities, it doesn't offer these capabilities through any programmatic application interface that an external system could use to access these results. These challenges will sound familiar to many mid-market insurance carriers, but they’re opportunities in disguise.
The opportunity to bring to bear proven and established patterns of solutions is there for the taking. FOUR SOLUTION PATTERNS There are four solution patterns that are commonly used to meet these challenges: 1) establishing a service-oriented architecture; 2) leveraging a data warehouse; 3) modernizing core systems; and 4) instituting a data management program. The particular solution a carrier pursues will ultimately depend on its individual context. 1. Service-oriented architecture SOA consists of independent, message-based, contract-driven and, possibly, asynchronous services that collaborate. Creating such an architecture in a landscape of disparate systems requires defining:
  • Services that are meaningful to the business: for instance, customer, policy, billing, claim, etc.
  • Common formats to represent business data entities.
  • Messages and message formats that represent business transactions (operations on business data).
  • Contracts that guide interactions between the business services.
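As a sketch only, the message-and-contract idea above might look like the following, assuming a hypothetical "Policy" business service (real projects would start from ACORD or OMG industry models rather than inventing fields like these):

```python
# Minimal sketch of contract-driven service messages for a hypothetical
# Policy service. Field names are illustrative, not from any standard.
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass(frozen=True)
class PolicySnapshotRequest:
    """Message asking the Policy service for a policy's state as of a date."""
    policy_number: str
    as_of: date

@dataclass(frozen=True)
class PolicySnapshotResponse:
    policy_number: str
    status: str            # e.g. "IN_FORCE", "LAPSED"
    annual_premium: float

def to_wire(message) -> str:
    """Serialize a message to a common wire format (JSON here)."""
    return json.dumps(asdict(message), default=str)

# Any system that can produce and consume these messages per the service
# contract can participate, regardless of its internal technology.
print(to_wire(PolicySnapshotRequest("AUTO-123", date(2015, 6, 30))))
```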
Organizations such as Object Management Group and ACORD have made a lot of headway toward offering industry-standard message formats and data models. After completing the initial groundwork, the next step is to enable existing systems to exchange defined messages and respond to them in accordance with the defined contracts. Simple as it might sound, this so-called service-enablement of existing systems is often not a straightforward step. Success here is heavily dependent on how well the technologies behind the existing systems lend themselves to service enablement. An upfront assessment would be entirely warranted. Assuming service enablement is possible, we’re still not in the clear. SOA only helps address issues of data format inconsistencies and data fragmentation. It will not help with issues of data quality and can offer only limited reprieve from unavailability of systems. Unless those can be addressed in concert, this approach will only provide limited success. 2. Data warehouse A data warehouse is a data store that accumulates data from a wide range of sources within an organization and is ultimately used to guide decision-making. While using a data warehouse as the basis of an operational system (such as customer self-service) is a choice, it is really a false choice for a couple of different reasons.
    • Building a data warehouse is a big effort. Insurers usually can’t wait for its completion. They have to move ahead with self-service now.
    • Data warehouses are meant to power business intelligence, not operational systems. If the warehouse already exists, there’s a 50% chance that it was built on a dimensional model. A dimensional model does not lend itself to serving as a source for downstream operational systems. On the other hand, if it’s a “single version of truth” warehouse, the company is well on its way to addressing the data challenges under discussion.
3. Modernizing core systems Modern systems make self-service relatively simple. However, unless modernization is already well underway, it, too, cannot be waited for, because implementation timeframes are so long. 4. Instituting a data management program A data management program is a solution that deals with specific data challenges, not the foundational reasons behind those challenges. To overcome the four challenges mentioned at the beginning of the article, a program could consist of a consolidated data repository implemented using a canonical data model on top of a highly available systems architecture leveraging data quality tools at key junctions. Implementing such a program would be much quicker than the previous three options. Furthermore, it can serve as an intermediate step toward each of the previous three options. As an intermediate step, it has a risk-mitigation quality that’s particularly appealing to mid-sized organizations. PRACTICAL STEPS Here are the practical steps that a carrier can take toward instituting its own data management program that can successfully support customer self-service. The program should have the following five characteristics: 1. A consolidated data repository The antidote to data fragmentation is a single repository that consolidates data from all systems that are a primary source of customer data. For the typical carrier, this will include systems for quoting, policy administration, CRM, billing and claims. A consolidated repository results in a replicated copy of data, which is a typical allergy of traditional insurance IT departments. Managing the data replication through defined ETL processes will often preempt the symptoms of such an allergy. 2.
A canonical data model To address inconsistencies in data formats used within the primary systems, the consolidated data repository must use a canonical data model. All data feeding into the repository must conform to this model. To develop the data model pragmatically, simultaneously using both a top-down and a bottom-up approach will provide the right balance between theory and practice. Industry-standard data models developed by organizations such as the Object Management Group and ACORD will serve as a good starting point for the top-down analysis. The bottom-up analysis can start from existing source system data sets. 3. "Operational Data Store" mindset -- a Jedi mind trick Modern operational systems often use an ODS to expose their data for downstream usage. The typical motivation for this is to eliminate (negative) performance impacts of external querying while still allowing external querying of data in an operational (as opposed to analytical) format. Advertising the consolidated data repository built with a canonical data model as an ODS will shift the organizational view of the repository from one of a single-system database to that of an enterprise asset that can be leveraged for additional operational needs. This is the data management program’s equivalent of a Jedi mind trick! 4. 24/7/365 availability To adequately position the data repository as an enterprise asset, it must be highly available. For traditional insurance IT departments, 24/7/365 availability might be a new paradigm. Successful implementations will require adoption of patterns for high availability at multiple levels. At the infrastructure level, useful patterns would include clustering for fail-over, mirrored disks, data replication, load balancing, redundancy, etc. At the SDLC level, techniques such as continuous integration, automated and hot deployments, automated test suites, etc. will prove to be necessary. 
At the integration architecture level (for systems needing access to data in the consolidated repository), patterns such as asynchronicity, loose coupling, caching, etc., will need to be followed. 5. Encryption of sensitive data Once data from multiple systems is consolidated into a single repository, the impact of a potential breach in security will be amplified several-fold, and breaches will happen; whether internal or external, innocent or malicious, they are only a matter of time. To mitigate some of that risk, it’s worthwhile to invest in infrastructure-level encryption of, at a minimum, sensitive data; options are available at the storage, database and data access layers. A successful data management program spans several IT disciplines. To ensure coherency across all of them, oversight from a versatile architect capable of conceiving infrastructure, data and integration architectures will prove invaluable.
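Steps 1 and 2 above, a consolidated repository fed through defined ETL from inconsistently formatted sources, can be sketched minimally. The system names, field layouts and matching rule here are hypothetical:

```python
# Toy ETL sketch: two source systems with inconsistent formats feeding one
# consolidated repository keyed on a canonical customer identity.
def from_auto_system(row: dict) -> dict:
    """The auto policy admin system stores names as 'LAST,FIRST'."""
    last, first = row["DRIVER_NM"].split(",")
    return {"first_name": first.strip().title(),
            "last_name": last.strip().title(),
            "source": "auto"}

def from_home_system(row: dict) -> dict:
    """The home system stores first/last separately, in mixed case."""
    return {"first_name": row["fname"].strip().title(),
            "last_name": row["lname"].strip().title(),
            "source": "home"}

def load(records: list, repository: dict) -> None:
    """Match on (first, last) -- a weak key, as the article notes -- and
    merge each record into the consolidated repository."""
    for rec in records:
        key = (rec["first_name"], rec["last_name"])
        repository.setdefault(key, []).append(rec["source"])

repo: dict = {}
load([from_auto_system({"DRIVER_NM": "DOE,JANE"})], repo)
load([from_home_system({"fname": "jane", "lname": "doe"})], repo)
print(repo)   # {('Jane', 'Doe'): ['auto', 'home']}
```

The fragile name-based match is exactly why the article stresses data quality tools at key junctions; in practice, matching would use stronger keys where they exist.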

Samir Ahmed


Samir Ahmed is an architect with X by 2, a technology consulting company in Farmington Hills, MI, specializing in software, data architecture and transformation projects for the insurance industry. He received a BSE in computer science and computer engineering from the University of Michigan.

Cyber's Surprising Importance for M&A

Many corporate deals can unwittingly void important cyber coverage, so the policy needs to be reviewed early in any potential transaction.

Although many people think of cyber insurance when confronted with a data breach, cyber insurance may not be quite so top of mind in the context of corporate mergers and acquisitions. Cyber insurance should be, because policies typically contain provisions that are directly affected by such transactions. Enterprises should take a close look at their cyber insurance policy provisions early on in the deal-making process so that coverage for the affected enterprises can be maximized. The focus on cyber should be especially acute now, both because M&A activity continues to rise and because the importance of cyber coverage is surging on the heels of recent, headline-making data breaches. Cyber insurance policies, like most other policies, typically provide coverage to the named insured identified in the policy, as well as to any subsidiary of the named insured that was created by the date the policy took effect. Carriers generally ask enterprises to identify all such subsidiaries during the application process. Although disclosed subsidiaries may generally be considered "insureds" at the time cyber policies are issued, cyber policies may contain provisions that specify the steps the insured must take to obtain coverage for subsidiaries acquired or created, or for entities involved in mergers or consolidations. Insureds that are considering mergers or acquisitions should ensure compliance by carefully reviewing their cyber insurance policies early in the transaction process. Relevant provisions might be found in various places in cyber policies, including within the policy's conditions, definitions and exclusions. Mergers and newly acquired or created subsidiaries The steps an insured must take to secure coverage for a newly acquired subsidiary vary from policy to policy and may depend on the financials of the subsidiary. 
For example, under one cyber policy, if the acquired entity has revenue greater than 10% of the named insured's total annual revenue, the named insured must: provide written notice before the acquisition, obtain the insurer's written consent and agree to pay any additional premium required by the insurer. Another insurer requires an insured that merges with, acquires or creates an entity with assets exceeding 10% of the total assets of the insured to provide full details of the transaction as soon as practicable. The insurer is entitled to impose additional terms, conditions and premiums, at its sole discretion. Under the terms of a different policy, if the named insured acquires or creates another organization in which the named insured has an ownership interest of greater than 50%, the organization is covered for insured events that take place after the date of acquisition or creation, but only if the named insured provided notice to the insurer no later than 60 days after the effective date of the acquisition or creation, along with any information the insurer may require. The insured may be exempted from that process if, among other things, the new subsidiary's gross revenues are 10% or less of those of the named insured. Relevant terms are implicated under another cyber policy if the insured acquires or creates an entity that becomes a subsidiary, acquires an entity by merger or purchases assets or assumes liabilities of an entity without acquiring the entity. If the total assets of the acquired or created entity, or the combined total amount of the purchased assets or assumed liabilities, are less than 30% of the consolidated assets of the insured, the new entity may be entitled to certain coverages under the policy if the named insured provides written notice as soon as practicable, but in no event later than 60 days after the effective date of the transaction.
The named insured will have to provide any requested information and may be subject to an increased premium. A different insurer requires the named insured to provide notice of a newly formed or acquired subsidiary within 60 days of the transaction if the named insured has more than 50% of the legal or beneficial interest of the entity. If, however, the total assets or total revenues of the new entity exceed 15% of the total assets or revenues of the named insured, the named insured must provide the “full particulars” of the new entity, and the insurer must agree in writing to provide coverage. The insurer may charge an increased premium and amend policy terms. Divested entities and changes in ownership Provisions of cyber policies also may be affected by changes to entities that initially are covered under the policy. For example, policies may provide that if the named insured’s legal or beneficial interest in a subsidiary becomes less than 50%, the entity will no longer qualify as a subsidiary under the policy and will lose coverage. Cyber policies also may contain provisions that will be triggered in the event of a takeover of the named insured. Conclusion Corporate transactions may have important effects on the coverage provided under a cyber insurance policy. Because there are no standard-form cyber policies, the provisions that might be implicated by any such transaction, including important notice requirements, will vary from policy to policy. Entities should carefully review their coverage at the very outset of the deal-making process to ensure that they fully understand their rights and obligations and comply with all policy provisions so that coverage can be maximized.
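The threshold-and-notice logic in these examples can be sketched as a simple decision rule. The 10% revenue test and 60-day window below are taken from the examples above; any real policy's terms will differ, so this is illustrative only:

```python
# Illustrative decision rule for a newly acquired entity, using a
# hypothetical 10% revenue threshold and 60-day notice window.
def acquisition_action(acquired_revenue: float,
                       named_insured_revenue: float,
                       days_until_notice: int) -> str:
    """What a named insured may need to do for a newly acquired entity."""
    if acquired_revenue > 0.10 * named_insured_revenue:
        # Larger acquisitions: prior notice and insurer consent,
        # possibly with additional premium.
        return "prior written notice + insurer consent required"
    if days_until_notice > 60:
        return "notice window likely missed -- coverage at risk"
    return "automatic coverage likely, subject to timely notice"

# A $5M-revenue acquisition by a $100M-revenue named insured, noticed
# within 30 days, falls under the automatic-coverage branch.
print(acquisition_action(5_000_000, 100_000_000, days_until_notice=30))
```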

Judy Selby


Judy Selby is a principal with Judy Selby Consulting LLC and a senior advisor with Hanover Stone Partners LLC. She provides strategic advice to companies and corporate boards concerning insurance, cyber risk mitigation and compliance, with a particular focus on cyber insurance.

Are Your Separation Agreements Unlawful?

An EEOC suit challenges companies' right to use agreements to keep employees from filing charges or cooperating with investigations.

Entering into separation agreements (also called “settlement” or “severance” agreements) has become commonplace for employers. By way of these agreements, employers generally provide a monetary benefit to outgoing employees, or employees who have asserted claims. In exchange, the employees waive certain legal rights to which they otherwise may have been entitled. Some of the most common provisions that bind employees are:
  • confidentiality (prohibiting the employee from disclosing the amount of severance money received, and other terms)
  • non-disparagement (prohibiting the employee from making unfavorable comments about the employer)
  • releases (the employee forever agrees not to file claims against the employer)
  • cooperation (the employee agrees to notify the employer if she receives information about an investigation or claim against the employer).
In light of a recent complaint filed by the U.S. Equal Employment Opportunity Commission (EEOC), the legality and enforceability of existing signed separation agreements could be subject to challenge. The EEOC recently filed a lawsuit against CVS, a national provider of prescriptions and health-related services, in a federal district court. The EEOC alleges that CVS entered into more than 650 unlawful separation agreements with employees. Specifically, the EEOC alleges that the separation agreements, which contained the common provisions described above, unlawfully made severance pay depend on:
  • prohibiting the employees from filing charges at the EEOC
  • interfering with the employees’ ability to cooperate with investigations by the EEOC and other federal agencies.
According to the EEOC’s complaint, the separation agreements violate Title VII of the Civil Rights Act of 1964. The lawsuit is pending, and the federal district court has not issued any ruling on the merits. Nevertheless, in light of the EEOC’s complaint, employers should be mindful of existing and future separation agreements and should review such agreements with their employment counsel to ensure that they comply with the law.

Laura Zaroski


Laura Zaroski is the vice president of management and employment practices liability at Socius Insurance Services. As an attorney with expertise in employment practices liability insurance, in addition to her role as a producer, Zaroski acts as a resource with respect to Socius' employment practices liability book of business.

Today's Digital Customer: It's Me

I've had to send faxes, repeat my member ID over and over in the same conversation and not know how a policy change will affect my bill. Not good.

I’m a child of the '80s --  to be specific, 1983. Some might say I'm a Generation Xer; some might say I'm from Generation Y; others might describe me as a Millennial. Regardless of what you call me, if you're going to sell me something, it better involve technology. You see, my life revolves around technology. I grew up with computers, just as a previous generation grew up with TV. I use a branchless bank. I stream my television content. I use social media to communicate with my friends. I don't like paper. Technology makes my life easier. So you can imagine, as I looked at career possibilities, I never saw myself working for the insurance industry. I hardly knew anything about it, actually, except that it didn't seem like an industry that was very innovative or technologically advanced. I remembered:
  • sitting at my insurance agent’s desk watching him use an application that looked like it belonged on the Oregon Trail.
  • having to send faxes to make policy changes.
  • being directed to call the insurer’s corporate headquarters to file a claim, and in the process repeat my member ID over and over during the same conversation.
  • not being able to know the impact that a change to my policy would have on my bill.
I've watched over the last several years as an entire industry has reevaluated itself and rethought how it does business and markets itself to the next generation of consumers. But there’s still a long way to go. I’d like to see:
  • tremendous investment in modern technology and the delivery of useful, self-service capabilities.
  • companies embracing more forward-thinking mobile and social media trends, meeting customers like me where we are.
  • investigation and implementation of innovative technologies involving telematics and other tools for consumers.
  • a more intimate relationship between customers and the carrier, which will leverage advancements in analytics, business intelligence and predictive modeling.
  • the industry attracting young, top IT talent so insurers can continue to innovate.
For me and my generation, these will be welcome developments for a couple of reasons. First, we’re digital natives. There aren’t too many facets of our lives that haven’t gone electronic. For me, my church-giving and insurance may be all that remains offline. Second, now that the industry has begun to reverse course and is upping its technology game, my generation has another employment option, which we most likely would not have considered otherwise. No, it’s not true that we all want to work at Apple or Google, but we do want to invest our considerable talents in an industry that has interesting problems to solve and, more importantly, an environment that shares our enthusiasm for and trust in technology. Although insurance has been a bit slow on the uptake, it’s truly gratifying to see an entire industry take my generation seriously, incorporate our needs into overall strategies, accommodate our lifestyles and view us as something worth investing in. I look forward to watching technology shape insurance innovation. Who knows? Maybe this is the year experiments like usage-based insurance will become a reality. The battle for the hearts of my generation is on. Only the tech-savvy carriers and agents will triumph.

David Ollila


David Ollila is responsible for client development at X by 2, an application and data architecture consultancy, in Farmington Hills, MI, specializing in insurance technology transformation and modernization.