
TRIA Non-Renewal: Effect on Work Comp?

Likely not much. The workers' compensation marketplace has already adapted to the absence of the Terrorism Risk Insurance Act.

I was greeted this morning with the news that Congress failed to act on the Terrorism Risk Insurance Act (TRIA) before adjourning for the year, which means TRIA will be allowed to expire at the end of the year. This was a big surprise, as most observers expected the House to be the obstacle to extending TRIA. Last week, the House passed a TRIA extension bill, but it was the Senate that ultimately failed to take up a vote on the issue.

Why did this happen? Unfortunately, Congress has a habit of tacking unrelated riders onto bills in the hope of getting those issues passed. In this case, the House added amendments to NARAB II legislation, which has to do with licensing of insurance agents and brokers. Some in the Senate were not comfortable with those amendments, which kept the Senate from approving the House bill on TRIA.

So what happens now with TRIA? The new Congress will reconvene on Jan. 6, 2015, and the expectation is that it will take up TRIA. However, given what just happened, you cannot assume the new Congress will pass a TRIA bill, and, even if it does, the bill may look substantially different from what was on the table.

What does this mean for the workers' compensation industry? We have already seen the reaction from the marketplace. Back in February 2014, carriers started issuing policies that contemplated coverage without the TRIA backstops. We saw some carriers pull back from certain geographic locations, most notably in New York City, and particularly in Manhattan. We also saw some carriers change the terms of their policies and only bind coverage through the end of the year, giving themselves the flexibility to renegotiate terms or terminate coverage if TRIA did not renew. There were legitimate concerns that the workers' comp marketplace in New York City would be in chaos by the fourth quarter of 2014 as brokers scrambled to place coverage beyond Jan. 1, 2015.
The New York State Insurance Fund was in the middle of these discussions, as it faced the prospect of having to provide coverage for employers if the private marketplace did not respond.

As the year progressed, something else happened. The marketplace responded. While some carriers pulled back in certain geographic locations, others stepped up to take their place. While some carriers tied their policy expiration to the expiration of TRIA, other carriers did not. Ultimately, employers were still able to obtain workers' compensation coverage in the private marketplace.

What does this mean going forward? There may still be some policies out there that have endorsements allowing the carrier to cancel or renegotiate terms if TRIA expires, but I do not get the impression that this is a widespread issue. Because workers' compensation is statutory, and carriers cannot exclude for cause, there cannot be terrorism risk exclusions on a workers' compensation policy. The carrier's only choice is to provide coverage or decline the risk. While this may not hold true for other lines of coverage, the workers' compensation marketplace has adapted to the absence of TRIA. Carriers are likely paying more attention to their geographic concentration of exposures, which means employers will have fewer choices and may see higher pricing. But, at the end of the day, employers should be able to obtain workers' compensation coverage without the TRIA backstop in place.

What Employees Want for Christmas

Hint: The answer isn't a gift card for designer coffee, an outing with laser tag, team building and pizza or even a new title.

‘Tis the season… when leaders everywhere scramble to find the perfect holiday gift for their staffs. This year, will it be:
  • The latest business title?
  • A gift card for designer coffee?
  • An outing featuring laser tag, team building and pizza?
Perhaps you'd like to do something entirely different. Why not give employees something they really want this year – a gift that will keep giving long after the egg nog is gone? Consider something from my Holiday Gift Guide for leaders who want to delight employees and deliver results.

1. Encourage career development. (Price: $0) According to recent research conducted by Aon Hewitt, 91% of all employees report that career development is among their top priorities. Yet, in engagement survey after engagement survey, managers consistently earn their lowest marks in this area. Imagine your employees' delight if this holiday season you invested some genuine attention in understanding who they are and what their hopes and dreams are, as well as in helping them develop plans for their career goals. (This gift teaches why giving is as good as receiving because, as you grow others, you'll also deck your own halls with greater capacity and capability.)

2. Remove roadblocks. (Price: N/A) Forget the visions of sugar plums. What employees really dream about is working without unnecessary obstacles, fire drills or other irritants. Ask them about what gets in the way of their best work, and you'll likely be surprised by the struggles and workarounds that are part of their daily routines. Watch employees light up brighter than any holiday decoration if you take even small steps toward clearing the way for them.

3. Express genuine appreciation. (Price: Priceless) Spread good cheer in the form of recognition and positive feedback. Too frequently, leaders become inadvertent Scrooges, withholding praise and wondering why performance is lackluster and morale is low. Catch people in the act of doing things right. Be on the lookout for contributions -- large and small. "Thank you" doesn't require fancy wrapping or a bow, yet it's warmer to the hearts of employees than chestnuts roasting.
These Holiday Gift Guide suggestions come with a range of benefits. They’re value-priced to fit any budget. There’s no tax or shipping.  And you can even hope that they’re "re-gifted" as employees find ways to extend the positive practices you model to others. So, with the number of holiday shopping days quickly dwindling, skip the malls, dig deeper -- within yourself, not your wallet -- and experience some real magic this holiday season… and all year long. "Gift me" with your own thoughts!  What do your employees want most? What gifts are you considering this holiday season?

Julie Winkle Giulioni


Julie Winkle Giulioni has spent the past 25 years improving performance through learning. She’s partnered with hundreds of organizations to develop and deploy innovative training products that are in use worldwide. Julie is well-known and well-regarded for her creative, one-of-a-kind solutions that consistently deliver bottom-line results.

How (and Why) to Cancel Group Health

Small and medium-sized businesses can cut costs while providing better coverage to employees through "defined contribution health benefits."

Today, many small businesses are canceling group health insurance coverage, and it's not because they don't want to offer employee health benefits. Businesses are canceling group health coverage because of cost, participation and administrative hassle -- or simply because their employees can get cheaper and better coverage on the individual health insurance exchanges. This shift in small business health insurance is happening now and will only accelerate in years to come. For small business owners and HR professionals, this brings up both uncertainty and big relief.

When a business cancels group health coverage, employees not only lose their health insurance but also lose the tax advantages associated with employer-based premium contributions. The change can be hard on employee morale and retention. At least, this was the case in the past. Now, most employees are better off purchasing individual health insurance and receiving reimbursement to cover a portion of their out-of-pocket premium cost. This type of approach is called a defined contribution health benefit. As more and more small and medium-sized businesses cancel group health coverage, this is the emerging way to offer a formal health benefit without the cost and complication of group health coverage. Here are three steps to cancel group health coverage while offering the same (or better) health coverage to employees.

Step 1: Cancel Group Health Coverage

To cancel your group health coverage, you need to call a customer representative with the insurance company. An insurance representative can confirm the steps the company must take to successfully cancel the policy. For instance, some insurance companies may require that a fax or letter be sent confirming the cancellation. Correspondence via email only may result in the company's being obligated to pay for next month's premium.
Your health insurance agent or broker will be able to assist you with the process, but you, the policyholder, need to call directly. Most group health insurance plans are "unilateral contracts," meaning businesses can cancel a group health insurance plan at any point during the year. While most carriers "request" 30 days' notice, this is not always required.

Tip: When you cancel group health coverage, you make all employees covered under the plan eligible for a special enrollment period for individual health insurance. By canceling group health coverage, you are also giving eligible employees access to discounts on individual health insurance (via the premium tax credits).

Step 2: Establish a Defined Contribution Health Plan

Work with your broker or a defined contribution software provider to establish a defined contribution health plan. In setting up the plan, you'll give employees a set monthly amount to spend on their own health insurance policy. Employees can purchase a policy in a state health insurance exchange or through the private market (via a broker, online, etc.). Then, employees can use their employer-funded allowance to be reimbursed for qualified health insurance premiums, up to the amount in their balance. To stay compliant, the plan must be formally administered to meet certain requirements of the IRS, HIPAA, ERISA and the ACA. For more: How to Set Up a Defined Contribution Health Plan.

Step 3: Implement the Defined Contribution Health Plan

Once you have set up your defined contribution plan, there are five steps to successfully implement the program.
  1. Enroll employees
  2. Educate employees
  3. Provide resources to help employees select a health plan
  4. Plan for reimbursements
  5. Communicate with employees early on, and frequently
As you can see from this list, besides planning for the administration, implementation is all about educating employees. Educate employees on:
  • How defined contribution healthcare works
  • Why the company has decided to offer health benefits in this way (remember, it is better for them, too!)
  • The benefits of individual health insurance and defined contribution such as plan choice, flexibility and cost savings
  • How to purchase individual health insurance for themselves and their family
  • How to request reimbursement and use their defined contribution employee portal
For sample ways to communicate defined contribution to employees, see this guide. These three steps will empower your business to cancel group health coverage and offer employees better health benefits with defined contribution. What questions do you have about how to cancel group health coverage and make employees even happier? Leave a comment below. Originally posted at Zane Benefits.com.
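The reimbursement mechanics in Step 2 -- an employee is paid back for qualified premiums only up to the amount in his or her allowance balance -- boil down to a simple cap-and-deduct rule. A minimal sketch in Python; the function name and dollar amounts are hypothetical, not taken from any particular defined contribution platform:

```python
def reimburse(balance, premium_claimed):
    """Pay out a qualified premium claim, capped at the remaining allowance balance.

    Returns (amount_paid, remaining_balance).
    """
    paid = min(balance, premium_claimed)
    return paid, balance - paid

# A $400 monthly allowance against a $450 premium receipt: only $400 is paid out.
paid, remaining = reimburse(400.00, 450.00)
```

If the premium claim is smaller than the allowance, the difference simply stays in the employee's balance for later claims.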

Christina Merhar


Christina Merhar is the managing content editor for Zane Benefits, the leader in individual health insurance reimbursement for small businesses. Since 2006, Zane Benefits has been on a mission to bring the benefits of individual health insurance to business owners and their employees. Christina received her BA from Western Washington University and joined the Zane Benefits team in 2012.

Best Way to Track Customer Experience

While there are wars fought over the three most common metrics, it turns out that the metric isn't what matters.

Many commentators have recently debated the relative merits of customer effort score (CES) vs. net promoter score (NPS). As a leader who remembers the controversy that surrounded NPS when it first came to dominance, I find the debate concerning. I still recall the effort people wasted trying to win the battle against NPS, pointing out its flaws and the lack of academic evidence for it, when NPS was really a gift horse being looked in the mouth. I would caution anyone currently worrying about whether CES is the "best metric" to remember the lessons that should have been learnt from "the NPS wars."

For those not so close to the topic of customer experience metrics: Although there are many different metrics that could be used to measure the experience your customers receive, three dominate the industry. They are customer satisfaction (CSat), NPS and now CES. These measure slightly different things, but all report on ratings given by customers to a single question. Satisfaction captures emotional feeling about an interaction with the organization (usually on a five-point scale). NPS captures an attitude following that interaction, i.e. likelihood to recommend, on a 0-10 scale; detractors (those providing a 0-6 score) are subtracted from promoters (those with 9-10 ratings) to give a net score. CES returns to attitude about the interaction, but rather than asking about satisfaction, it seeks to capture how much effort the customer had to put in to achieve what she wanted or needed (again on a five-point scale).

The reality, from my experience (excuse the pun), is that none of these metrics is perfect. Each has dangers of misrepresentation or simplification. I agree with Professor Moira Clark of the Henley Centre for Customer Management: When we discussed this, we agreed that ideally all three would be captured by an organization.
This is because satisfaction, likelihood-to-recommend and effort required are different lenses through which to study what you are getting right or wrong for your customers. That utopia may not be possible for all organizations, depending on volume of transactions and your capability to randomly vary metrics captured and order of asking. But my main learning point from "the NPS wars" over a couple of years is that the metric is not the most important thing here. As the old saying goes, “It’s what you do with it that counts.” After NPS won the war and began to be a required balanced scorecard metric for most CEOs, I learned that this was not a defeat but rather that gift horse. Because NPS had succeeded in capturing the imagination of CEOs, there was funding available to capture learning from this metric more robustly than was previously done for CSat. So, over a year or so, I came to really value the NPS program we implemented. This was mainly because of its granularity (by product and touchpoint) and the “driver questions” that we captured immediately afterward. Together, these provided a richer understanding of what was good or bad in the interaction, enabled prompt response to individual customers and targeted action to implement systemic improvements. Now we appear to be at a similar point with CES, and I want to caution about being drawn into another metric war. There are certainly things that can be improved about the way the proposed CES question is framed (I have found it more useful to reword and capture “how easy was it to…” or “how much effort did you need to put into…”). However, as I hope we all learned with NPS, I would encourage organizations to focus on how you implement any CES program (or enhance your existing NPS program) to maximize learning and the ability to take action. That is where the real value lies. 
Another tip: Using learning from your existing research, including qualitative, can help frame additional questions to capture following CES. You can then use analytics to identify correlations. Having such robust regular quantitative data capture is much more valuable than being "right" about your lead metric.
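The three survey metrics described above reduce to simple arithmetic over the raw ratings. A minimal sketch in Python (the function names are mine, for illustration only): NPS nets promoters against detractors, while CSat and CES are typically reported as a plain average on their five-point scales.

```python
def nps(ratings):
    """Net promoter score on a 0-10 scale:
    % promoters (9-10) minus % detractors (0-6); 7-8 are passives."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

def average_score(ratings):
    """CSat or CES on a five-point scale, reported as a simple mean."""
    return sum(ratings) / len(ratings)

# Ten likelihood-to-recommend ratings: 6 promoters, 2 passives, 2 detractors
recommend = [10, 9, 9, 8, 7, 6, 10, 5, 9, 9]
print(nps(recommend))  # 40.0
```

Note how the netting makes NPS swing harder than a mean would: moving a single respondent from passive to detractor shifts this ten-response score by a full 10 points.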

Paul Laughlin


Paul Laughlin is the founder of Laughlin Consultancy, which helps companies generate sustainable value from their customer insight. This includes growing their bottom line, improving customer retention and demonstrating to regulators that they treat customers fairly.

The End of Market Segmentation

Big data will bring three waves of change -- soon -- and they are already showing up in healthcare.

IBM CEO Ginni Rometty's speech a year and a half ago giving marketers 18 months to "sink or swim" raised a few eyebrows, and her deadline is upon us. She defined three imminent waves of change:

Wave 1: The shift from the market segment to the individual. It spells the death of marketing to the "average customer." With the right data, you can do things like real-time pricing.

Wave 2: The evolution from reaching out to customers to creating a "system of engagement" that keeps track of interactions between brand and customer and uses insights at every touch point.

Wave 3: Data is used to personalize interactions and provide the right context.

Telling words, indeed, if you are a marketer who hasn't gotten on the big-data bandwagon. For insurers, these words have a chilling relevance. Let's see how these three waves can be adopted by insurers, especially in the healthcare space:

1. The shift toward individualized marketing is already in full swing, primarily because of healthcare reform, and its main enabler is data -- mostly structured data, because insurers are still not ready to fully embrace big data. The more accurate and timely the information an insurer can collect on an individual member's interactions with the healthcare system, the more realistic its performance evaluation by the government. Of course, data-savvy insurers won't have to wait for the government to identify gaps in the delivery of care and will work toward closing them. One doesn't need the foresight of IBM's CEO to predict that these insurers will be in the best position to squeeze the competition out of the marketplace.

2. A system of engagement is also in various stages of existence at various health plans to closely monitor interactions between their members and providers and to manage aspects of the members' health through programs for case management, disease management, etc.
Those familiar with care management would readily recognize the role that predictive modeling plays in identifying and prioritizing candidates. We all know that, without the right amount and quality of data, such models are useless. In the coming years, the scope of care management is only going to broaden, simply because the industry has shifted from a model of reactive interventions to proactive care management.

3. The idea of personalizing care for the individual (or families) has been around for a few decades already. If you look at patient-centered medical homes, you can see that the industry is already working, albeit slowly, to realize this vision. Ideals such as continuous and integrated care, seamless coordination and communication and constant improvement all rest on the effective use and exchange of information (structured and unstructured) among the various stakeholders.

It's safe to conclude that the three waves are already upon us in the healthcare industry. Some insurers will ride them more effectively than others, because they will be able to harvest so much information -- from electronic medical records, lab results, X-rays, MRI results, even genetic data. Setting a hard-and-fast deadline might be a bit presumptuous, but it isn't too hard to predict that it won't take more than a few years to transform the role of big data from a luxury to an absolute necessity for them!

Syed Haider


Syed Haider is an architect with X by 2.

He has 20 years of experience as a software engineer and architect. 

He holds a master’s degree in computer science from the University of Michigan and a BCS in computer science from the National University of Computer and Emerging Sciences in Pakistan.

The Myth About Contractors and Risk

Senior executives say, “There is no need to worry about that risk – I have transferred that to the contractor.” This is not possible.

The notion that by outsourcing or contracting you have transferred your risk to another party is a myth. Senior executives, both in government and private enterprise, say, "There is no need to worry about that risk – I have transferred that to the contractor." This is simply untrue. If you own the consequences (or at least part of them), then you own the risk.

For example, in the TV series "Air Crash Investigation," there is an episode titled "Dead Weight." In this episode, maintenance staff working for a company sub-contracted to conduct maintenance on behalf of Air Midwest's primary maintenance contractor skip nine of 25 steps detailed in the maintenance manual when adjusting the tension on the elevator control cable. As a result, the cable is unable to traverse its full range of motion. When Air Midwest flight 5481 took off overweight, the center of gravity shifted rearward as the landing gear was raised, pitching the nose higher. Because of the issues with the elevator control cable, the pilots were unable to bring the nose down. The aircraft stalled and crashed into a hangar on the ground, killing all passengers and crew on board. The issue arose because there was no contract oversight/assurance by either Air Midwest or the primary contractor.

A contract is a control -- but a control is only as good as the measurement of its effectiveness. Organizations that outsource simply cannot afford to assume that, because there is a contract in place:
  • They have outsourced the risk to the contractor
  • The contractor’s performance will be as contracted and as reported.
This last point may seem a cynical one, but you need to accept that the primary driver for a contractor is to maximize profit. If shortcuts can be taken, they are likely to be pursued. What is even more important for organizations to understand is that, if the function that is contracted out is a compliance requirement, and there is a compliance breach, it is the organization -- not the contractor -- that will be held to account.

So, what are the keys to reducing the outsourcing risks? Firstly, the organization needs to ensure that, before developing the solicitation documentation for an outsourced function, the risks during the contracted period are identified and assessed, and treatments (such as oversight and performance measurement) are fully built into the contract. It is absolutely critical that compliance risks with the highest-level consequences are included in this list. Secondly, the organization needs to ensure that contract performance is actively monitored and measured (i.e., do not simply accept the contractor's performance reports as fact).

In essence, organizations need to remember that, although you can outsource responsibility for the management of functions, you cannot outsource accountability for the consequences of not managing risk. In simple terms, if the contractor fails, the organization fails. If an organization owns the consequence, it owns the risk. If your organization is one where contract management and contract assurance are not front of mind, or yours is one where the assumption is that the risk has been transferred to the contractor, you are in a dangerous position.

Rod Farrar


Rod Farrar is an accomplished risk consultant. His knowledge of the risk management domain was initially informed through his 20 years of service as an army officer in varying project, security and operational roles. Subsequent to that, he has spent eight years as a professional risk manager and trainer.

5 Accidents Just Waiting to Happen

To limit workers' comp claims and lawsuits, especially fraudulent ones, you have to avoid five common mistakes.

If you’re like any successful business I know, to create sustainable growth and be competitive in your marketplace you must continue to control your operating costs. Among other things, you must limit the number and cost of workers' compensation claims and lawsuits. You’re reading the right article if:
  • You’re frustrated with how claims and lawsuits, especially fraudulent ones, are killing your profitability, and you feel powerless to control them.
  • You know losses are an unfortunate part of business, and you've tried to reduce them, but with little or marginal success.
  • You’re so ready to eliminate claims in a predictable way, and you want a system that gets consistent results.
In this article, you’re going to learn:
  • The five costly mistakes leaders make that kill profitability.
  • Why you need to implement systems and get your team on board to reduce risk and increase profitability.
  • How you can gain peace of mind by knowing what your blind spots are that are costing you money.
The Five Costly Mistakes

1. Not knowing your numbers: Tom Peters coined the phrase, "What gets measured gets done." I find many businesses do not have a handle on the analytics of their claims. More important than just knowing your number of claims is having data on hand that will let you skillfully mitigate the risks that are causing your claims. In other words, do you see trends in the types of claims you are having? Without good analytics, it's difficult to create a targeted plan to reduce your risks. If you're not familiar with your numbers, we would suggest you commit to finding out what your numbers are, and the story behind those numbers.

2. Not knowing the real cost of claims: When coaching clients across the country, I'm amazed to learn that many business leaders do not fully understand the financial impact that claims have on their bottom line and top line. OSHA suggests that the indirect cost of claims can range anywhere from 1.1 to 4.5 times the direct cost of a claim itself. A claim totaling $67,000 multiplied by an indirect loss cost factor of just 1.1 suggests that the indirect loss costs would total $73,700. Adding those two numbers together, the company has sustained total loss costs of $140,700. If you are a company sporting a 9% net profit margin, you'd have to sell $1.6 million in products and services just to break even on that claim.

3. Not knowing your operational blind spots: In the best-selling Execution, authors Larry Bossidy and Ram Charan wrote, "Too many leaders today fool themselves into thinking their businesses are well run." We've all heard the saying, "Sometimes you don't know what you don't know." It's critically important that companies seek out operational best practices to lower their chances of having claims and lawsuits. That has presented a challenge to many businesses because there are so many independent silos within the insurance industry. Between claims adjusters, loss control representatives, underwriters, medical providers, etc., most businesses feel that these groups rarely collaborate to create a holistic best-practices platform. When given the opportunity, company leaders want to do the right thing and play by the right rules, but they don't know what the right rules are. When companies begin deploying industry-endorsed policies and procedures (possibly through products such as my firm's RiskScore), they predictably see reductions in their number and cost of claims. These procedures start with a review of a company's hiring practices and what policies and procedures they have before and after a claim occurs.

4. Not having systems in place: Dr. George Weathersby is known for his thoughts on systems. He says, "Ordinary people achieve extraordinary results consistently using the system. Extraordinary people (the really smart people), without a system, won't produce consistent results." This reality holds true in the insurance world. In my consulting practice, we advise clients to deploy systems that involve multiple layers of management. In football terms, we say "the left tackle and the right tackle, along with the rest of the team, need to know where to go when the play starts." When you've got your team assembled on the field, and everyone is coached on what their responsibilities are in the risk mitigation process, you will see a dramatic reduction in your number and cost of claims. It's important to locate systems (such as our Diamond Risk Reduction System) that can map out checklists and procedures, along with training and claims management tools, to streamline this process.

5. Not taking action: Take a deep breath, and don't be overwhelmed by the size and scope of this important topic. A journey of 1,000 miles starts with one step. All you need to know is that there are systems out there that can help you in creating better results.
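The indirect-cost arithmetic behind the "real cost of claims" mistake is easy to verify. A quick sketch in Python using OSHA's multiplier approach and the article's own figures (the function names are mine, for illustration):

```python
def total_loss_cost(direct_cost, indirect_factor):
    """OSHA-style estimate: indirect costs are a multiple of the direct cost,
    and the two are added together for the total loss cost."""
    return direct_cost + direct_cost * indirect_factor

def sales_to_break_even(total_cost, net_margin):
    """Revenue needed to earn back a loss at a given net profit margin."""
    return total_cost / net_margin

total = total_loss_cost(67_000, 1.1)       # $67,000 + $73,700 = $140,700
sales = sales_to_break_even(total, 0.09)   # roughly $1.56 million in sales
```

At the high end of OSHA's range (a 4.5 factor), the same $67,000 claim would imply about $368,500 in total loss costs, which is why the indirect side of a claim so often dwarfs the visible one.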

Rick Dalrymple


Rick Dalrymple is one of the owners of Insurance Office of America and has been in the business for over 30 years. In just three short years with a leading national insurance carrier, Rick was recognized nationally for his outstanding achievements and is considered by his peers to be in the top five percent of his field. He was named partner of the year in 2005.

Court Takes Practical Approach to SB 863

California appellate judges say the law's changes to certain reviews under workers' comp are NOT retroactive.

The 4th District Court of Appeal has ruled on the “retroactive” application of the independent bill review (IBR) provisions of California's SB 863 and whether the legislature intended to remove from the Workers' Compensation Appeals Board (W.C.A.B.) the jurisdiction to address bill disputes that existed before the law took effect. (Jan. 1, 2013).  In California Insurance Guarantee Association (C.I.G.A.) v W.C.A.B. (Elite Surgical Centers), the court ruled that the legislature did not wrap up pre-existing medical billing disputes into the new IBR process and that the W.C.A.B. continues to have jurisdiction to resolve those disputes. The court also found that the process used by workers' compensation judges (WCJ) and adopted by the W.C.A.B. to determine the appropriate fee in these disputed cases constituted the necessary substantial evidence. The issue in the case involved fees for outpatient surgical center fees for more than 300 cases for treatment provided before Jan. 1, 2004. (The cutoff date is significant as ambulatory surgery centers (ASC) became subject to the official medical fee schedule (OFMS) after that date. Before that date, only hospital-based surgery centers were subject to the OFMS.) Evidence was presented that Elite had increased its rates in November 2000. C.I.G.A. (along with seven other defendants) contested the amount billed and paid the undisputed portion of the bill. The remainder was left for resolution at the W.C.A.B.  At the time this matter came to trial, the W.C.A.B. had consolidated 333 liens, involving different procedures, into a single litigated case. The case was litigated for 17 days. Elite provided its evidence showing the services provided and its customary fees accepted for similar procedures. Defendants presented contrary evidence, to portray the Elite charges as grossly disproportionate to those of other local providers. Defendants also argued that the ASC OFMS that went into effect on Jan. 
1, 2004, was the most reasonable and objective method for determining a fee for Elite’s services. Before a decision was issued, the legislature passed SB 863, which became effective on Jan. 1, 2013. On Feb. 1, 2013, the WCJ issued his decision awarding specific amounts for each of the different types of services at issue. The amount awarded was not based strictly on the evidence presented by one side or the other but represented a figure midway between the ASC OMFS that became effective on Jan. 1, 2004, and the OMFS for hospital-based surgery centers that was in effect beforehand. The awarded fees were between 22% and 45% (depending on the procedure) of what Elite had presented as its reasonable charges. Defendants appealed from the WCJ’s order, arguing that SB 863 removed the W.C.A.B.’s jurisdiction to resolve billing disputes and instead required the use of the newly enacted IBR process to resolve the disputed bills. Defendants also argued that the WCJ’s decision was not based on substantial evidence. After initially granting reconsideration, the W.C.A.B. affirmed the WCJ’s decision. Defendants’ petition for writ of review was granted, and the appellate court upheld the W.C.A.B.’s jurisdiction to decide the disputed issues. The court also found that the WCJ’s analysis was based on substantial evidence. In considering the potential application of IBR to the disputes existing as of the time SB 863 became effective, the court took note of section 84 of the statute, which provided: "This act shall apply to all pending matters, regardless of date of injury, unless otherwise specified in this act, but shall not be a basis to rescind, alter, amend or reopen any final award of workers' compensation benefits." Defendants argued, unsuccessfully, that because there was not another provision dictating when the IBR provisions were to become effective, the provisions applied to all pending matters.
The court agreed that at first blush the section appeared to mandate application of IBR to pending matters. The court, however, did not stop at that analysis, noting that a review of the entire framework of the IBR procedure indicated the matter was more complex. The court pointed out the impracticality of applying the new provisions to existing cases because of how the statutory process was set up: “After considering SB 863 as a whole, we conclude that this legislation is ambiguous with respect to whether the IBR process was intended to apply to pending billing disputes, or, rather, was intended to apply only prospectively, to new billing disputes that arise with respect to injuries that occur after the effective date of the legislation. Attempting to apply section 84 of SB 863 in this case would leave these parties without a process by which to have their dispute resolved by a third party, since the new IBR process may be utilized only if certain conditions precedent have been met, and the deadlines for meeting those conditions have passed. Leaving these parties without a viable process to decide their dispute cannot be what the legislature intended. We conclude that in creating the IBR process, the legislature intended to establish a new dispute resolution procedure that would apply to disputes arising on or after the effective date of the legislation, and not to disputes like this one that were pending at the time the legislation went into effect.... "Although this provision does not expressly state that the legislature intended that the IBR and IMR processes go into effect only prospectively, it provides an indication that the legislature viewed both the IMR and IBR processes as applying to future employment-related injuries and to future disputes as to medical care and billing for such care.” Defendants argued that the lack of process for disputes on billing before Jan. 1, 2013, could be addressed by administrative regulation. 
The court pointed out that the administrative director (AD) had already adopted regulations and that they provided no such process. Acknowledging the ambiguity of the statutory language and the practical problems in applying the statutory process where the events precedent to IBR have already passed, the court ruled: “In the face of such ambiguity, we are led to interpret the statute as operating prospectively. … [statutes ordinarily are interpreted as operating prospectively in the absence of a clear indication of a contrary legislative intent]; see also Myers v. Philip Morris (2002) 28 Cal.4th 828, 841 [when a statute is ambiguous regarding retroactivity, it is construed to be prospective in application]. In construing statutes, there is a presumption against retroactive application unless the legislature plainly has directed otherwise by means of " 'express language of retroactivity or . . . other sources [that] provide a clear and unavoidable implication that the legislature intended retroactive application.' " (McClung v. Employment Development Dept. (2004) 34 Cal.4th 467, 475 (McClung).) Although, at first blush, SB 863 section 84 might appear to constitute " 'express language of retroactivity' " …, it specifically allows for other portions of the statute to provide a different rule regarding retroactive/prospective application, and at least one other provision of the statute, Labor Code section 139.5, suggests that the IBR process was intended to apply only to disputes over medical treatment provided for injuries that occur on or after Jan. 1, 2013.... "Considering these obstacles to applying the new billing review process to pending claims, it is clear that the legislature could not have intended to leave parties who had pending billing disputes on the effective date of the new statutory scheme with no meaningful procedure for resolving their disputes.
” The court also provided an extensive discussion of the WCJ’s analysis in determining the appropriate fee for the services in dispute. The court determined the WCJ properly applied the guidelines required in the Tapia v Skill Master Staffing case, including its reliance on Kunz v Patterson Floor Coverings, both W.C.A.B. en banc decisions. “As the WCJ noted, the formula that he used to calculate the "reasonable" facility fees for the relevant time period for the procedures at issue took into consideration what Medicare allowed, what Elite charged, what Elite accepted as payment, what the OMFS for ASCs as of Jan. 1, 2004 allowed, what the OMFS for hospitals during much of the relevant period allowed and the fees that other ASCs billed and accepted for the same or similar services. The WCJ considered evidence as to all of these factors, and arrived at results that fell somewhere in the middle of all of these figures. These conclusions are supported by the evidence and are clearly permissible.” Comments and Conclusions: The court was clearly swayed by the practical issues in attempting to implement the IBR procedure to disputes where the necessary steps to enter the IBR process had long since passed. While defendants and amicus argued the procedural gaps could be addressed by regulation, the court remained unconvinced that the legislature intended the billing dispute process to be restarted and then shoehorned into IBR. While the fact that the legislature had defined an implementation timetable for IMR but not IBR may have made it tempting for the court to rule, and defendants to argue, for retroactive application, the practical problems in doing so ultimately carried the day. The court’s rather lengthy discussion and approval of the WCJ’s analysis of how to resolve the facility fee dispute may have broader import in the long run, as it may provide a roadmap for how to address similar disputes in existing cases. While most ASC fees in cases from after Jan.
1, 2004, are fairly easily resolved, cases pending for services before that date still exist. Prior cases such as Kunz and Tapia had provided some guidance, but translating those cases into easily applied formulas still poses problems. The WCJ’s discussion and the issues he considered, as well as the objectively based formula, may serve as guidance in pending cases with similar disputes. This does not necessarily mean that all such cases should resolve at the same midway point. Among the considerations that both the WCJ and the appellate court pointed out as significant was the quality of the facility. Elite presented evidence that its facility was state-of-the-art and provided higher-quality medical technology than other local facilities. One witness seemed to suggest the facility was closer to a hospital-based surgery center than most ASCs. One might therefore view the WCJ’s objective standard as the upper end of the scale for ASC facilities. Surgery centers with more mundane credentials might very well have to settle for a value between the 2004 OMFS and the WCJ’s formula.

Richard Jacobsmeyer


Richard (Jake) M. Jacobsmeyer is a partner in the law firm of Shaw, Jacobsmeyer, Crain and Claffey, a statewide workers' compensation defense firm with seven offices in California. A certified specialist in workers' compensation since 1981, he has more than 18 years' experience representing injured workers, employers and insurance carriers before California's Workers' Compensation Appeals Board.

10 Questions Boards Should Be Asking on Risk Management

Many directors know much too little about how to oversee the issue.

Although most boards of directors are aware of risk and the need to manage it, many board members do not actually know much about risk management or how to oversee it. This article reviews a list of questions that may help board members execute their mandate. The list is not comprehensive but is illustrative of important points a board member would want to know about how an organization is managing its risk.
  • Who is responsible for the enterprise risk management or risk management process?
Without assigning someone clear accountability for the process of risk management, it is unlikely that risks will be identified, prioritized and mitigated across an organization on a periodic basis and in a thorough way. It is also unlikely that risk will be given the focus required to achieve a reasonable degree of control over the many uncertainties facing organizations in today’s highly dynamic marketplace. Less important are such details as the title of the individual with the accountability or how large a budget or staff the individual is provided. A named, accountable person is key to ensuring that a sound process is in operation.
  • What are the most significant risks to the strategy, and what is being done to address these?
Given that failures are generally caused by a strategic risk that has not been addressed rather than by a catastrophic storm or single cyber attack, for example, it is vital for organizations to know and deal with their strategic risks. Strategic risks typically involve aspects of the business such as:
  1. What is the organization’s vision of the future – does it take into account where technology, science and other dynamic forces are going?
  2. What is the mission – what does the organization make or sell, to whom and in which geographies?
  3. What are the goals and objectives – how much does the organization want to grow, at what margins, keeping what capital and debt levels?
  4. What are the values – how does the organization want to behave and be perceived in the marketplace?
  5. What is the position with strategic partners, investors and vendors?
  • Is there a single risk register that collates all significant risks (strategic and non-strategic), with action plans to mitigate them?
Strategic and non-strategic risks of a certain magnitude should be combined into one risk register that allows management and the board to see:
  1. all the major risks
  2. what is being done to mitigate them
  3. what is the progress against the risk mitigation plan
The board should expect to see such a report or ask for one, if it is not already being created.
  • What are the top 10 risks overall?
These should be top of mind for the organization’s senior team at all times and be a familiar topic of discussion with the board. Board members should consider if these make sense based on all the information they have been privy to about the organization.
  • Do individual performance plans include risk management?
If managing risk is really important to the organization, the individual performance plans of a large number of employees at different levels of the organization should include a specific objective or task related to risk management. Thus, the performance against these would be evaluated at regular intervals. It is well-known that what gets measured gets managed, and what gets rewarded gets attention.
  • Who is responsible for information technology security?
Clear accountability for the task of ensuring IT security is also critical. With the risk of cyber breaches, denial-of-service attacks, extortion and theft of bank funds and intellectual property so high, an organization needs to ensure it has the necessary expertise to create a secure technological platform. This can be in the form of hired staff or expert contractors. In the case of some recent, high-profile breaches, it appears that the role of chief information security officer (CISO) was either non-existent or that the individual filling the role was brand new. An inference can be drawn that a seasoned CISO who understood the organization might have made a difference. Of course, having the role filled does not guarantee that a security risk will never come to fruition. But it does reduce the risk to some extent, and having a CISO makes discovery of and recovery from a breach or attack quicker and more efficient when one does occur.
  • Do all employees get some information and training on identifying and reporting a risk? Is there a risk reporting “hot-line”?
The answer to this question will give the board insight into several things. If there is a hot-line, it shows that the organization is seriously interested in identifying risks and that the topic of risk is being handled fairly transparently within the organization. If there is not one, the board may wonder why there is no channel for the rank and file to alert management about risks.
  • Have correlated risks been looked for, and what are they?
Large and small organizations, alike, have the potential to harbor correlated risks. Correlated risks are a group of risks that might occur at the same time because there is a relationship of some sort among them. The aspect at play could be:
  1. a geography in common
  2. a single source with multiple ties. For example, a company that has call centers, data processing and manufacturing plants in a single Southeast Asia country has the potential for correlated risk if that country is hit by a natural catastrophe, political upheaval or some other turbulence.  Another example is, if different product units of a manufacturing company use the same supplier for raw materials or OEM parts, there is the potential for correlated risk if that supplier is unable to deliver on its orders.
A correlation might also be in terms of chain reactions. One risk event may give rise to other risks, which is often true in the case of natural disasters such as earthquakes and hurricanes. A question about correlated risks will not only elicit an answer about those risks but also provide insight as to whether risk is being discussed in depth and across organizational silos.
  • Are a business continuity plan and disaster recovery plan in place?
No matter how robust a risk management process is, a company will experience catastrophes of one sort or another from time to time. There is a need for plans that deal with these because reaction speed is critically important in managing them well. The business continuity plan has the aim of keeping all or some of the business running from another venue or with back-up systems or on-call staff, or whatever allows continuous operations. The disaster recovery plan has the mission to restore normal operations as quickly as possible after the business has been interrupted in whole or in part. In reviewing these plans, key elements to look for include:
  1. a communication hierarchy for notification that is complete and up to date
  2. a decision tree for creating clarity around who can make which decisions
  3. a list of third-party resources that have been previously vetted and can be called in to assist – some will be part of any insurance policies that may be triggered by the risk/loss event.
  • What risks are being transferred by insurance versus what is being mitigated internally, and what is the quality of the insurer?
Insurance can be an effective and efficient way to handle risk when it is used in a well-constructed fashion. The board will want to consider high-level issues such as:
  1. Is the right set of risks covered; i.e. those that are less predictable, require special expertise and are beyond the financial wherewithal of the organization to withstand?
  2. Are the right limits being purchased; i.e. is the value of the policy high enough to truly cover a major loss?
  3. How highly is the insurer rated, and what is its claims service reputation?
A way in which the board can judge the merit of the answers to these questions is to find out:
  1. the kind of analysis that was done to determine the insurance program
  2. who did the analysis
  3. whether there is benchmark information to look at from comparable organizations.
There are, undoubtedly, other questions that the board may need to ask. These are an excellent starting place for getting a sense of how well the organization is addressing risk.

Donna Galer


Donna Galer is a consultant, author and lecturer. 

She has written three books on ERM: Enterprise Risk Management – Straight To The Point, Enterprise Risk Management – Straight To The Value and Enterprise Risk Management – Straight Talk For Nonprofits, with co-author Al Decker. She is an active contributor to the Insurance Thought Leadership website and other industry publications. In addition, she has given presentations at RIMS, CPCU, PCI (now APCIA) and university events.

Currently, she is an independent consultant on ERM, ESG and strategic planning. She was recently a senior adviser at Hanover Stone Solutions. She served as the chairwoman of the Spencer Educational Foundation from 2006-2010. From 1989 to 2006, she was with Zurich Insurance Group, where she held many positions both in the U.S. and in Switzerland, including EVP corporate development, global head of investor relations, EVP compliance and governance and regional manager for North America. Her last position at Zurich was executive vice president and chief administrative officer for Zurich’s worldwide general insurance business ($36 billion GWP), with responsibility for strategic planning and other areas. She began her insurance career at Crum & Forster Insurance.

She has served on numerous industry and academic boards. Among these are: NC State’s Poole School of Business’ Enterprise Risk Management’s Advisory Board, Illinois State University’s Katie School of Insurance, Spencer Educational Foundation. She won “The Editor’s Choice Award” from the Society of Financial Examiners in 2017 for her co-written articles on KRIs/KPIs and related subjects. She was named among the “Top 100 Insurance Women” by Business Insurance in 2000.

How CAT Models Lead to Soft Prices

This article is the third in a series on how the evolution of catastrophe models provides a foundation for much-needed innovation in insurance.

In our first article in this series, we looked back at an insurance industry reeling from several consecutive natural catastrophes that generated combined insured losses exceeding $30 billion. In the second article, we looked at how, beginning in the mid-1980s, people began developing models that could help prevent a recurrence of those staggering losses. In this article, we look at how modeling results are being used in the industry. Insurance is a unique business. In most other businesses, expenses associated with costs of operation are either known or can be fairly estimated. The insurance industry, however, needs to estimate expenses for things that are extremely rare or have never happened before: the damage to a bridge in New York City from a flood, the theft of a precious heirloom from your home, a fire at a factory or even Jennifer Lopez injuring her hind side. No other industry has to make so many critical business decisions as blindly as the insurance industry. Even in circumstances in which an insurer can accurately estimate a loss to a single policyholder, without the ability to accurately estimate multiple losses all occurring simultaneously, which is what happens during natural catastrophes, the insurer is still operating blindly. Fortunately, the introduction of CAT models greatly enhances the insurer’s ability to estimate both the losses associated with a single policyholder and the concurrent claims arising from a single occurrence. When making decisions about which risks to insure, how much to insure them for and how much premium is required to profitably accept the risk, there are essentially two metrics that can provide the clarity needed to do the job. Whether you are a portfolio manager managing the cumulative risk for a large line of business, an underwriter getting a submission from a broker to insure a factory or an actuary responsible for pricing exposure, what these stakeholders minimally need to know is:
  1. On average, what will potential future losses look like?
  2. On average, what are the reasonable worst case loss scenarios, or the probable maximum loss (PML)?
Those two metrics alone supply enough information for an insurer to make critical business decisions in these key areas:
  • Risk selection
  • Risk-based pricing
  • Capacity allocation
  • Reinsurance program design
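These two metrics can be illustrated with a small simulation. The sketch below uses a made-up loss distribution and hypothetical figures, not output from any actual CAT model, to compute an average annual loss (AAL) and a 1-in-250-year PML from a list of simulated annual losses:

```python
import random

random.seed(7)

# Hypothetical simulated annual portfolio losses in $M -- stand-ins for the
# annual losses a CAT model would derive from event catalogs and exposure data.
years = 10_000
annual_losses = [random.paretovariate(2.5) * 5.0 for _ in range(years)]

# Average annual loss (AAL): the mean of the simulated annual losses.
aal = sum(annual_losses) / years

def pml(losses, return_period):
    """Probable maximum loss at a return period: the loss exceeded in
    only 1/return_period of simulated years."""
    ranked = sorted(losses, reverse=True)
    return ranked[int(len(losses) / return_period) - 1]

print(f"AAL: ${aal:.2f}M")
print(f"250-year PML: ${pml(annual_losses, 250):.2f}M")
```

The same two numbers drive all four decision areas listed above: the AAL anchors pricing, while the PML anchors capacity and reinsurance purchasing.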
Risk Selection

Risk selection includes an underwriter's determination of the class (such as preferred, standard or substandard) to which a particular risk is deemed to belong, its acceptance or rejection and (if accepted) the premium. Consider two homes: a $1 million wood frame home and a $1 million brick home, both located in Los Angeles. Which home is riskier to the insurer? Before the advent of catastrophe models, the determination was based on historical data and, essentially, opinion. Insurers could have hired engineers who would have informed them that brick homes are much more susceptible to damage than wood frame homes under earthquake stresses. But it was not until the introduction of the models that insurers could finally quantify how much financial risk they were exposed to. They discovered, shockingly, that on average brick homes are four times riskier than wood frame homes and are twice as likely to sustain a complete loss (full collapse). Knowing how two or more different risks (or groups of risks) behave at an absolute and relational level gives insurers a foundation to intelligently set underwriting guidelines that play to their strengths and exclude risks they do not or cannot absorb, based on their risk appetite.

Risk-Based Pricing

Insurance is rapidly becoming more of a commodity, with customers often choosing their insurer purely on the basis of price. As a result, accurate ratemaking has become more important than ever. In fact, a Towers Perrin survey found that 96% of insurers consider sophisticated rating and pricing to be either essential or very important. Multiple factors go into determining premium rates, and, as competition increases, insurers are introducing innovative rate structures. The critical question in ratemaking is: What risk factors or variables are important for predicting the likelihood, frequency and severity of a loss?
Although there are many obvious risk factors that affect rates, subtle and non-intuitive relationships can exist among variables that are difficult, if not impossible, to identify without applying more sophisticated analyses. Regarding our example involving the two homes situated in Los Angeles, catastrophe models tell us two very important things: what the premium to cover earthquake loss should roughly be and that the premium for masonry homes should be approximately four times larger than wood frame homes. The concept of absolute and relational pricing using catastrophe models is revolutionary. Many in the industry may balk at our term “revolutionary,” but insurers using the models to establish appropriate price levels for property exposures have a massive advantage over public entities such as the California Earthquake Authority (CEA) and the National Flood Insurance Program (NFIP) that do not adhere to risk-based pricing. The NFIP and CEA, like most quasi-government insurance entities, differ in their pricing from private insurers along multiple dimensions, mostly because of constraints imposed by law. Innovative insurers recognize that there are literally billions of valuable premium dollars at stake for risks for which the CEA, the NFIP and similar programs significantly overcharge – again, because of constraints that forbid them from being competitive. Thus, using average and extreme modeled loss estimates not only ensures that insurers are managing their portfolios effectively, but enables insurers, especially those that tend to have more robust risk appetites, to identify underserved markets and seize valuable market share. From a risk perspective, a return on investment can be calculated via catastrophe models. It is incumbent upon insurers to identify the risks they don’t wish to underwrite as well as answer such questions as: Are wood frame houses less expensive to insure than homes made of joisted masonry? 
And what is the relationship between claims severity and a particular home’s loss history? Traditional univariate pricing analysis methodologies are outdated; insurers have turned to multivariate statistical pricing techniques and methodologies to best understand the relationships between multiple risk variables. With that in mind, insurers need to consider other factors, too, such as marketing costs, conversion rates and customer buying behavior, just to name a few, to accurately price risks. Gone are the days when unsophisticated pricing and risk selection methodologies were employed. Innovative insurers today cross industry lines by paying more and more attention to how others manage data and assign value to risk.

Capacity Allocation

In the (re)insurance industry, (re)insurers only accept risks if those risks are within the capacity limits they have established based on their risk appetites. “Capacity” means the maximum limit of liability offered by an insurer during a defined period. Oftentimes, especially when it comes to natural catastrophes, some risks have a much greater accumulation potential, and that accumulation potential is typically a result of dependencies between individual risks. Take houses and automobiles. A high concentration of those exposure types may very well be affected by the same catastrophic event – whether a hurricane, severe thunderstorm, earthquake, etc. That risk concentration could potentially put a reinsurer (or insurer) in the unenviable position of being overly exposed to a catastrophic single-loss occurrence. Having a means to adequately control exposure-to-accumulation is critical in the risk management process. Capacity allocation enables companies to allocate valuable risk capacity to specific perils within specific markets and accumulation zones to minimize their exposure, and CAT models allow insurers to measure how capacity is being used and how efficiently it is being deployed.
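A minimal sketch of the accumulation control described above might track aggregate insured limits per peril zone against a capacity budget. All zone names, limits and budgets below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical in-force policies: (accumulation zone, peril, insured limit in $M).
policies = [
    ("miami_dade", "hurricane", 40.0),
    ("miami_dade", "hurricane", 75.0),
    ("los_angeles", "earthquake", 60.0),
    ("miami_dade", "hurricane", 30.0),
]

# Capacity budget per (zone, peril), set from the carrier's risk appetite.
capacity = {
    ("miami_dade", "hurricane"): 120.0,
    ("los_angeles", "earthquake"): 100.0,
}

# Accumulate exposure per zone/peril and flag where no new risk could be bound.
accumulation = defaultdict(float)
for zone, peril, limit in policies:
    accumulation[(zone, peril)] += limit

for key, used in accumulation.items():
    headroom = capacity[key] - used
    status = "OVER CAPACITY" if headroom < 0 else f"{headroom:.0f}M headroom"
    print(key, f"used {used:.0f}M of {capacity[key]:.0f}M ->", status)
```

In practice the "used" figure would come from modeled PML contributions rather than raw limits, but the bookkeeping is the same.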
Reinsurance Program Design

With the advent of CAT models, insurers now have the ability to simulate different combinations of treaties and programs to find the right fit, optimizing their risk and return. Before CAT models, estimating the probability of attachment of one layer over another, or the average annual losses for a per-risk treaty covering millions of exposures, was a matter of gut instinct. The models estimate the risk and can calculate the millions of potential claims transactions, which would be nearly impossible to do without computers and simulation. It is now well-known how soft the current reinsurance market is. Alternative capital has been a major driving force, but we consider the maturation of CAT models as having an equally important role in this trend. First, insurers using CAT models to underwrite, price and manage risk can now intelligently present their exposure and effectively defend their position on terms and conditions. Gone are the days when reinsurers would have the upper hand in negotiations; CAT models have leveled the playing field for insurers. Second, alternative capital could not have the impact that it is currently having without the language of finance. CAT models speak that language. The models provide necessary statistics for financial firms looking to allocate capital in this area. Risk transfer becomes much more fungible once there is common recognition of the probability of loss between transferor and transferee. No CAT models, no loss estimates. No loss estimates, no alternative capital. No alternative capital, no soft market.

A Needed Balance

By now, and for good reason, the industry has placed much of its trust in CAT models to selectively manage portfolios to minimize PML potential. Insurers and reinsurers alike need the ability to quantify and identify peak exposure areas, and the models stand ready to help understand and manage portfolios as part of a carrier’s risk management process.
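The layer mechanics behind reinsurance program design can be sketched in a few lines: an excess-of-loss layer applied to simulated event losses, yielding an estimated probability of attachment and an expected ceded loss. The loss distribution and layer terms below are illustrative assumptions only:

```python
import random

random.seed(42)

def layer_loss(gross, attachment, limit):
    """Loss ceded to an excess-of-loss layer: the part of the gross loss
    above the attachment point, capped at the layer limit."""
    return min(max(gross - attachment, 0.0), limit)

# Hypothetical simulated per-event gross losses in $M -- stand-ins for a
# CAT model's event-loss table.
events = [random.expovariate(1 / 20.0) for _ in range(100_000)]

attachment, limit = 50.0, 100.0  # a 100M xs 50M layer
ceded = [layer_loss(x, attachment, limit) for x in events]

# Probability of attachment and average ceded loss per event.
prob_attach = sum(x > attachment for x in events) / len(events)
expected_ceded = sum(ceded) / len(ceded)
print(f"P(attach): {prob_attach:.3%}, expected ceded loss: ${expected_ceded:.2f}M")
```

Running the same calculation across alternative attachment points and limits is exactly the "simulate different combinations of treaties" exercise described above.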
However, a balance between the need to bear risk and the need to preserve a carrier’s financial integrity in the face of potential catastrophic loss is essential. The idea is to pursue a blend of internal and external solutions to ensure two key factors:
  1. The ability to identify, quantify and estimate the chances of an event occurring and the extent of likely losses, and
  2. The ability to set adequate rates.
Once companies have an understanding of their catastrophe potential, they can effectively formulate underwriting guidelines to act as control valves on their catastrophe loss potential but, most importantly, even in high-risk regions, identify those exposures that still can meet underwriting criteria based on any given risk appetite. Underwriting criteria relative to writing catastrophe-prone exposure must be used as a set of benchmarks, not simply as a blind gatekeeper. In our next article, we examine two factors that could derail the progress made by CAT models in the insurance industry. Model uncertainty and poor data quality threaten to raise skepticism about the accuracy of the models, and that skepticism could inhibit further progress in model development.

Nick Lamparelli


Nick Lamparelli has been working in the insurance industry for nearly 20 years as an agent, broker and underwriter for firms including AIR Worldwide, Aon, Marsh and QBE. Simulation and modeling of natural catastrophes occupy most of his day-to-day thinking. Billions of dollars of properties exposed to catastrophe that were once uninsurable are now insured because of his novel approaches.


James Rice


James Rice is senior business development director at Xuber, a provider of insurance software solutions serving 180+ brokers and carriers in nearly 50 countries worldwide. Rice brings more than 20 years of experience to the insurance technology, predictive analytics, BI, information services and business process management (BPM) sectors.