
How to Turn Workers’ Comp Into an Advantage

Workers' comp costs are so high that they are either a competitive advantage or disadvantage for contractors. Your choice.

Workers’ compensation should be a win-win proposition for employers and employees, but many contractors describe it as an insurance and risk-management pain point. Workers’ compensation premium is a major part of contractors’ total insurance costs, and the indirect costs associated with claims are a significant multiplier.
Additionally, recent formula changes have caused some companies to receive a higher experience modification rate (EMR). The increasing use of prescription medications, improper use of medical services and diagnosis of comorbidity conditions (i.e., disorders related to a primary disease) among workers are boosting medical costs, which have surpassed lost-time indemnity benefits as the largest component of loss costs. And the construction industry’s workforce shortage is leading to the hiring of less-experienced workers, who are more vulnerable to injuries.
In short, employee injuries affect productivity, quality and profitability on projects, thereby affecting a company’s overall financial performance. As such, workers’ compensation can be either a competitive advantage or disadvantage.
Companies that do not gain control over their workers’ compensation processes will face pressure to reduce costs elsewhere or carry higher levels of unallocated overhead. The result will be felt in higher insurance costs, increased bid rates and decreased productivity yields, as well as squeezed profit margins.
Start with an audit
A workers’ compensation audit diagnoses relative strengths and weaknesses of policies, procedures and protocols, and provides a roadmap to improve performance. An insurance advisor can help evaluate the company’s capabilities in three important phases, each targeting a different focus and desired outcome (see chart).
It’s important to review injury and claim performance metrics. A comprehensive loss analysis of the number, type, frequency and severity of claims is useful, especially when compared with exposure (whether payroll, work hours or full-time equivalency). The median duration of lost workdays per lost workday case can be compared against industry metrics by type of contracting operation. Although these are lagging indicators, they provide clues about where to focus prevention-based activities that then can be monitored as leading indicators.
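The frequency, severity and lost-workday metrics described above amount to a few lines of arithmetic. A minimal sketch, with hypothetical claim records (the field names and figures are illustrative assumptions, not a standard carrier loss-run format):

```python
# Hypothetical loss-run records; field names and figures are
# illustrative assumptions, not a standard carrier format.
claims = [
    {"type": "strain", "lost_workdays": 12, "incurred": 18_000},
    {"type": "laceration", "lost_workdays": 0, "incurred": 2_500},
    {"type": "fall", "lost_workdays": 45, "incurred": 95_000},
]
work_hours = 400_000  # total hours worked in the period

# Frequency per 200,000 hours (roughly 100 full-time-equivalent workers).
frequency = len(claims) * 200_000 / work_hours

# Severity: average incurred cost per claim.
severity = sum(c["incurred"] for c in claims) / len(claims)

# Median lost workdays per lost-workday case, a lagging indicator that
# can be benchmarked against industry figures.
cases = sorted(c["lost_workdays"] for c in claims if c["lost_workdays"] > 0)
mid = len(cases) // 2
median_lost_days = cases[mid] if len(cases) % 2 else (cases[mid - 1] + cases[mid]) / 2

print(frequency, severity, median_lost_days)
```

Tracked period over period, movements in these numbers point to where prevention-based activity is or isn't working.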
Larger contractors with more payroll exposure and more complex operations also may be interested in an alternative insurance program structure, such as intermediate or large-deductible, retrospectively rated and captive insurance programs. Many contractors seek to reinforce management accountability for workers’ compensation improvement by instituting premium-allocation and loss-cost chargebacks to operating divisions or departments. This information is useful in bonus program calculations.
A workers’ compensation audit should carefully consider the classifications of workers by job type and payroll code. Proper classification is essential to ensuring workers’ compensation premiums are properly calculated. This helps prevent adjusted premiums or return premiums following a premium audit, and helps ensure the company’s EMR is properly calculated. It may be advisable to review open major claim reserves well in advance of carriers filing the unit stat reporting data used to calculate EMRs.
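Because premium scales with both the class rate applied to each payroll dollar and the EMR, a misclassified payroll dollar compounds. A minimal sketch of the standard rating arithmetic (the class labels, rates and EMR below are hypothetical; actual figures come from carrier and rating bureau filings):

```python
# Hypothetical payrolls by workers' comp classification; the labels,
# rates and EMR below are illustrative, not filed values.
payrolls = {
    "carpentry": 800_000,
    "clerical": 150_000,
}
rates_per_100 = {"carpentry": 12.50, "clerical": 0.35}  # per $100 of payroll
emr = 1.15  # experience modification rate

# Manual premium: payroll / 100 * class rate, summed over classes.
manual_premium = sum(p / 100 * rates_per_100[c] for c, p in payrolls.items())

# Modified premium: manual premium scaled by the EMR.
modified_premium = manual_premium * emr
print(round(manual_premium, 2), round(modified_premium, 2))
```

Shifting even a modest slice of payroll between the high-rate and low-rate classes moves the premium materially, which is why the classification review above matters.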

Calvin Beyer


Cal Beyer is the vice president of Workforce Risk and Worker Wellbeing. He has over 30 years of safety, insurance and risk management experience, including 24 of those years serving the construction industry in various capacities.

'Component Medicine': The Medical Team Will See You Now

Historically, medicine was driven by a central force -- the primary-care physician -- but dynamics are changing.

It was an unmistakable conclusion from the recent eight-hour Medical Institute program that was part of the IAIABC 2014 Forum. Across multiple sessions, ranging from opioid abuse and insurance reform to physician dispensing and medical marijuana, one thing was clear: Medicine in this country is headed for an abrupt change, and the way workers’ compensation manages medical care will need to change with it. In fact, we can provide support and focus to that effort.

One of the most interesting points for me was the potential decentralization of medical treatment that may soon be upon us. Historically, medicine has been driven by a central force -- a primary-care provider who examines, refers and directs care for the patient. That doctor generally selected the lab, the specialists and the facilities to be used. Very few questioned these doctors, and many relied upon them for a labyrinth of health decisions. But now, economic forces and insurance reforms are dramatically changing that dynamic.

Please indulge me for a moment while we envision a different medical world, one that I call “Component Medicine,” where doctors with less individual time to commit meet more-empowered patients with new healthcare options available to them.

Everyone is aware that shrinking reimbursements and increased operational costs are forcing doctors to see more and more patients in the course of their day. At the same time, we see a rise in the use of physician assistants and nurse practitioners (NPs). This is reducing the singular importance of the primary-care physician and spreading the responsibility of care to other disciplines in a manner not seen previously in this country. NPs will be on the front lines of this change as our medical models gradually shift toward prevention- and outcome-based care. Likewise, other traditional medical-related services are undergoing dramatic change. The traditional lab may be replaced by a clinic in your local pharmacy, where you can get your blood tests in a more convenient and less expensive location. NPs may also be at that location, or even in your workplace, where they will be able to diagnose and treat illnesses, as well as work with patients on established prevention regimens.

In Component Medicine, care will be closer to home and proactive in nature, as we begin to rely on multiple professionals to meet our healthcare needs of the future. Those professionals will be selected for cost, convenience and consistency in an overall health regimen, giving the consumer more power and influence over the care they receive. Component Medicine will be team-driven, with the centralized role of primary-care physician playing an important, yet less critical, position than previously held.

Technology will also have a hand in this. Video conferencing and mobile monitoring equipment will provide better information to these members of the Component team. That improved information will lead to more accurate care and better results.

Even alternative medicines, those historically shunned by traditional medical providers, may find a playing position on this squad. Though they may lack hard scientific evidence of effectiveness, some alternative and holistic approaches hold value simply because people believe they work, and therefore should not be excluded from medical care.

Workers’ comp is, in my opinion, uniquely positioned to help develop and influence the adoption of this concept. Our industry, in its role as a health provision service, has more control over the various elements of care than any other singular entity. We have access to employers, patients and the medical community. In states with directed care, we could have even more immediate impact. As an industry we could help create those systems that provide effective, affordable care close to the patient’s home, and to a greater degree work to provide and support preventive care for those in our system.

Several months ago, I attended a presentation given by Dr. David Pate, CEO of Boise’s St. Luke’s Health System, the largest medical provider in Idaho. He spoke of the need for modern medicine to “get to where the people are” in a proactive sense: in their homes, their schools, their work and their churches. The problem, he noted, was that they had not figured out a way to get paid for doing that, with the result that the “broken” fee-for-services model still ruled the day.

His concept is a classic leveraging of the time-honored adage, “An ounce of prevention is worth a pound of cure.”

Today, the workers’ compensation industry can drive that theory to reality, and with the advent of the PPACA (Obamacare), we might not even have to pay for it all. The level of control that we employ in many states, along with the access and sway we have with key players in the country, means that we could help channel convenient Component Medical care that offers a strong preventive focus for not just those in our system but their uninjured coworkers, too.

The physician of the future isn’t an individual but rather a comprehensive medical team. It is Component Medicine, and it provides the key to effective, flexible and affordable care for our population in the future.

Data Analytics Comes of Age for Agents

You’ve already done the hard part by moving to an electronic agency management system platform. Now you need to start using your data.  

Sitting down for lunch with one of our top independent agents, I asked him about his business.  

"Things are great – we’re totally paperless now!" he responded triumphantly.

"So what are you doing with all of the data you’re collecting?" I asked.

"Oh, I’m too small to do any of that stuff," he said with a shrug.

"You’re not," I said. "In fact, it’s a powerful way for you to generate more business. Let me show you how...."

"Data analytics" sounds like rocket science—sophisticated, expensive, intimidating and beyond the reach of the typical independent agency. It isn't. Data analytics is simply the analysis of data that allows a person to make a better decision than they could without data.

The challenge occurs when there is so much data available that it becomes difficult to determine what information is relevant and what is not. It becomes even harder when the data is not stored in a way that can be easily analyzed.

Today’s technology allows people to analyze huge amounts of data in whatever form. Sophisticated software can identify patterns and relationships between millions of pieces of information that provide better insight into a subject. This is commonly referred to as "big data" analytics.

Don't get overwhelmed by these terms or the complexity of the algorithms used to analyze data. Just remember that the objective is to use data so you and your agency can make better decisions. Here are the key steps to improve your agency's performance:

Step 1: Understand what you have

Your agency contains a treasure trove of information about your existing clients and potential customers.

Before you can even begin to run a data analytics program, spend time understanding the data you already collect. Start by creating a spreadsheet with all of the data you collect when you onboard a new client -- for example, birthdate, home and work address.

Add information you collect as part of the underwriting process. For example, if you write a BOP policy for a client, capture all the additional data an insurer needs to evaluate the risk -- the number of employees, store locations and industry.

When this spreadsheet is completed, you will discover the sheer volume of data you already collect about your clients.
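The inventory step above can be prototyped in a few lines. A minimal sketch; the field names are examples, not a prescribed schema:

```python
# Fields captured at onboarding vs. during BOP underwriting; both
# sets are illustrative examples, not a prescribed schema.
onboarding_fields = {"name", "birthdate", "home_address", "work_address"}
underwriting_fields = {"employee_count", "store_locations", "industry"}

# Combined inventory, tagged by where each field is collected.
inventory = sorted(onboarding_fields | underwriting_fields)
for field in inventory:
    source = "onboarding" if field in onboarding_fields else "underwriting"
    print(f"{field}: {source}")
```

Even this toy version makes the point: the list grows quickly once underwriting data is included.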

Step 2: Understand what you want

Who are my most profitable clients? Are clients more profitable if I write both their commercial and personal lines insurance? How many policies per household do I need to maintain a high retention rate? How can I best target new clients? What type of people are my best referral sources? What marketing programs generate the best leads?

If you think you know the answers to these questions because you've asked them yourself, think again. Most agency owners base their answers on individual experience. That's no longer good enough. Insurance sales and marketing have transformed from an art to a science.

While the data you collect is extremely valuable, data analytics tools also allow you to incorporate outside data into your analysis. What information would you like to have about an existing client or a potential customer? What information would you like to know about a certain area or region?

Identify your "data gaps" -- information you don't have but would like to have about a client or a prospect. This might include their net worth, whether they own another home or their business affiliations.  Consider any information you would like to have about a specific geographic area or other external information that would be helpful in allowing you to attract and retain clients.
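The "data gap" exercise described above is, in programming terms, a set difference. A minimal sketch with hypothetical field lists:

```python
# Hypothetical field inventories; swap in your agency's real lists.
have = {"name", "birthdate", "home_address", "employee_count", "industry"}
want = have | {"net_worth", "second_home", "business_affiliations"}

# The "data gap": fields you want but do not yet collect.
data_gaps = sorted(want - have)
print(data_gaps)
```

The resulting list is exactly the shopping list to take to an outside data provider.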

Capturing all of this additional "outside" data is beyond the capability of any individual agency. But today there are companies that do just that. Find one that offers subscription- or transaction-based solutions, with little or no start-up costs, that are easily accessible by using their secure website. Find a platform you can use any time to plug in or access the data you want.

The data relationships that you build will allow you to create a strategic advantage. Stay away from cookie-cutter solutions that just provide "answers" to data questions. They don't allow you to differentiate yourself through the results of the analysis.

Step 3: Put the data to work

Does your agency management system have a data analytics feature or tool? If it does, subscribe to it. If it doesn’t, demand that the vendor offer such a tool.

If your agency management system doesn't have a data analytics tool, reach out to the insurance company you write a lot of business with and ask if you can partner with them on a data analytics project. Offer to share your information if they will analyze your book of business. Make sure you play a key role in defining the data to be analyzed, and most importantly make sure you define the hypothesis or data relationship you are looking to uncover.
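Defining the hypothesis up front, as suggested above, can be sketched concretely. For instance, testing whether households with both commercial and personal lines retain better, using a toy book of business (every record below is made up for illustration):

```python
from collections import defaultdict

# Toy book of business; every record below is made up for illustration.
clients = [
    {"lines": "both", "renewed": True},
    {"lines": "both", "renewed": True},
    {"lines": "both", "renewed": False},
    {"lines": "personal_only", "renewed": True},
    {"lines": "personal_only", "renewed": False},
    {"lines": "personal_only", "renewed": False},
]

counts = defaultdict(lambda: [0, 0])  # segment -> [renewals, total]
for c in clients:
    counts[c["lines"]][0] += c["renewed"]
    counts[c["lines"]][1] += 1

# Retention rate per segment: renewals / total clients in the segment.
retention = {seg: r / t for seg, (r, t) in counts.items()}
print(retention)
```

The same segment-and-compare pattern works for any of the Step 2 questions; only the grouping key changes.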

Take action

Today, customer acquisition and retention takes place in real time, or close to it. The more information you have about current and potential customers, the better you will be able to address their needs when and where they want it. That's why you need to embrace data analytics -- it gives you the information you need, when you need it.

If you are like most agencies, you’ve already done the hard part by getting rid of your paper files and moving to an electronic agency management system platform. Now you need to start using your data.  You have a great opportunity to become a sophisticated marketer and drive better performance and growth out of your agency.

What are you waiting for?

Medical Marijuana Law: Effect in Illinois

Contrary to what you might first think, the act still permits employers to prohibit possession or consumption of marijuana on their property.

Last year, Illinois passed a “medical marijuana” law that became effective Jan. 1, 2014, known as the Compassionate Use of Medical Cannabis Pilot Program Act, 410 ILCS 130/1, et seq. The act allows doctors to recommend and certify the use of medical marijuana by patients who are under the doctors’ care for certain qualifying medical conditions. Certain rights of employers are affected, but in other ways it will be business as usual for employers.

Most notably, employers cannot discriminate against a registered patient on the basis of his or her registration (in most cases). This mandate may require employers to reconfigure their drug policies and certain provisions in their employee handbooks to ensure compliance with the act. There will also need to be management training to educate managers and supervisors.

Contrary to what you might first think, the act still permits employers to operate a drug-free workplace. Employers are allowed to prohibit possession or consumption of marijuana on their property. Further, the act specifically allows employers to enforce work rules, give drug tests and discipline employees exhibiting signs of impairment while at work. Employees beware! The act is not a license to possess or be high at work.

Based on the rights that employers still retain, it appears inevitable that sticky issues will arise as the act is implemented and employers struggle with compliance as well as with enforcing their own policies. For example, while the act expressly allows employers to conduct drug testing, what if an employee’s drug test registers marijuana use, but the test cannot differentiate whether that use was hours, days or months ago? Would refusing to hire that individual be acceptable enforcement of a drug-free workplace, or would that decision be discrimination against an individual for his or her “status” as a registered medical marijuana patient?

Moreover, the law allows employers to maintain a drug-free workplace “provided the policy is applied in a non-discriminatory manner.” It is unclear whether patients will be able to assert disparate impact claims arguing that employers’ facially neutral workplace policies have a statistical impact on their “protected class.” Additionally, the law requires that an employee disciplined for exhibiting signs of impairment be given an opportunity to contest the basis for the determination, but it does not provide any guidance on what type of procedural protection the employee must receive. Finally, it is unclear what interplay, if any, this Illinois law will have with the federal Americans With Disabilities Act.

Unfortunately, many of the gray areas surrounding the act will likely be resolved through future litigation. To make sure your clients are prepared, we suggest having a lawyer review their policies and procedures and provide training to their management personnel. Also, ensure that your clients have a robust employment practices liability policy in place that will respond and defend them if they face a discrimination suit related to the act.

Laura Zaroski


Laura Zaroski is the vice president of management and employment practices liability at Socius Insurance Services. As an attorney with expertise in employment practices liability insurance, in addition to her role as a producer, Zaroski acts as a resource with respect to Socius' employment practices liability book of business.

The Truth on Workers’ Comp Premiums

As employers try to limit their workers' compensation premiums, my suspicion is that many do not realize that insurers have traditionally relied on investment income to, in effect, subsidize underwriting costs, and that the subsidy is going away. Insurers are relying on “safer” Treasury bonds and are not realizing the returns they received before the Great Recession.

Sure, employers feel workers' comp insurers are too profitable. In fact, the combined ratio -- insurers' total costs for covering work-related incidents, divided by total premiums -- was 106 in 2013, according to the National Council on Compensation Insurance. That means that, for every $100 that insurers received in premiums, they paid $106. Insurers need to shrink the combined ratio to be profitable and need to make up for the diminishing investment income, so premiums have been going up for the past three years. Experts expect this to continue.

So, to control premiums, employers must improve the experience modifier that is used to calculate their rates. Employers need to address the direct and indirect costs of work-related injuries, illnesses and diseases, investing in workplace safety, return-to-work programs and other initiatives. Insurers must use predictive modeling to produce more sensitive risk measurements. (Here is a blog post that goes into detail.)

The good news is that workers’ compensation claim costs are not out of control as they were in the past. Those of us who are old enough to remember the late 1980s and early 1990s remember just how bad it was. Liberty Mutual, often considered the largest workers’ compensation carrier in the nation, quit offering coverage in its home state of Massachusetts in the early 1990s because costs were spiraling out of control. Today, overall costs are going up, but more slowly, because the frequency of claims has been declining for 20 years.

There are many possible reasons why frequency has declined. Some point to reforms reducing claims eligibility for workers’ compensation. But, if this were a large factor, I think we would be seeing more work-related claims in the tort system. Others cite changes in workplace exposure. Some point to the shift in the kind of work Americans are doing. For example, high-risk jobs in manufacturing have been exported in recent years. And while some manufacturing jobs are returning to the U.S. because of lower energy costs here, more work is being automated, making it less risky. Meanwhile, still-high unemployment rates mean there is less risk exposure.

I believe the No. 1 reason frequency has declined is workplace safety. While I cannot prove this on a quantitative basis, I make my observations based on 25 years of observing workers’ compensation. Back in the early 1990s, employers were discovering how much they could lower their premiums through safety. When I was the lead reporter for BNA’s Workers’ Compensation Report from the mid-1990s to 2000, I spent a lot of time writing about employers that were discovering strategies to contain workers’ compensation costs. Many of these approaches are now used widely. There remain, however, many employers who need to get religion.

While medical-cost inflation for workers’ compensation remains a concern, it is not in the double digits as it was 20 years ago; it has been about 3% annually in recent years. Workers’ compensation insurers still pay more for procedures than healthcare insurers. Medicare will not pay for opiates dispensed by doctors, but workers’ compensation will in many states.

The $1 billion question is how Obamacare will affect workers’ compensation claim costs. Some worry that claims that previously would have been handled under healthcare insurance will be shifted to workers’ comp, but I doubt it, because workers’ comp is just too complicated. (It could turn out that Obamacare will be more complicated than workers’ compensation, but a worker still needs to prove work-relatedness for a claim.)

As a whole, indemnity claim costs have been relatively flat in recent years. In states where the maximum weekly benefit that workers can receive is relatively low, such as Virginia, indemnity costs are naturally lower than in jurisdictions where the maximum weekly benefit is much higher, such as the District of Columbia. Reducing the amount of time workers are on workers’ compensation, through quality medical care and return-to-work programs, has also helped curtail the financial burden of claims. But, again, more employers still need to get religion, and for reasons that go beyond reducing the time that employees are on workers’ compensation. It is also true that return to work is challenging in the current economy, as there are fewer jobs available.

Besides national economic factors, employer premiums are affected by workers’ compensation conditions in individual states. California’s combined ratio has been in the triple digits, so employers there are seeing bigger premium increases than in other states. Meanwhile, there is always the political wildcard in workers’ compensation, which can favor the interests of insurers, employers, organized labor, plaintiffs' attorneys and others, depending on who is in power. Employers often feel too busy to be politically involved in the workers’ compensation system but can be a critical voice for change. Employers that want to make a difference should look into joining UWC in Washington, D.C., and the Workers’ Compensation Research Institute. I have worked with both of these groups in various capacities and believe they are worth the investment. (By the way, neither organization knows I am recommending them.)

Making the case for investing in workers’ compensation is a challenge. But because insurers can no longer use investment income to soften the blow of rising workers’ compensation costs, employer investment in curtailing claim costs is more important than ever.
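The combined-ratio arithmetic cited in this piece works out as follows. A minimal sketch; the 106 figure is NCCI's reported 2013 result, and the premium base is illustrative:

```python
# Combined ratio: total losses and expenses divided by earned premium,
# expressed per 100. The 106 figure is NCCI's reported 2013 result;
# the premium base is illustrative.
premiums_earned = 100.0
losses_and_expenses = 106.0

combined_ratio = losses_and_expenses / premiums_earned * 100
underwriting_result = premiums_earned - losses_and_expenses  # negative = loss
print(round(combined_ratio, 1), underwriting_result)
```

Any combined ratio above 100 means an underwriting loss, which is exactly the gap investment income used to cover.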

Annmarie Geddes Baribeau


Annmarie Geddes Baribeau, president of <a href="http://www.lipoldcommunications.com/">Lipold Communications</a> and senior associate at <a href="http://aartrijk.com/">Aartrijk</a>, is also a contributing writer for <a href="http://leadersedgemagazine.com/">Leader’s Edge</a> magazine. She has written and published several booklets and more than 500 articles for national and regional publications on topics related to workers’ compensation, health insurance, human resources and management.

Whistleblower Suits: Emerging Risk on MSP

There is an emerging risk on Medicare Secondary Payer (MSP) compliance because of private citizens filing lawsuits.

There is an emerging area of risk associated with Medicare Secondary Payer (MSP) compliance. Workers’ compensation, liability and no-fault insurance, including self-insurance plans, are exposed to penalties and conditional payments, and there may be violations of the False Claims Act (31 U.S.C. §§3729–3733) (FCA) that could lead to fines plus treble damages. The risk stems from lawsuits commonly known as qui tam actions, brought by private citizens known as relators. Relators can recover anywhere from 15% to 30% of the damages in these suits, plus attorney’s fees and costs. The success of such lawsuits largely depends on whether the U.S. intervenes as plaintiff.

Companies and insurance carriers that are responsible reporting entities (RREs) must exercise caution about what data on settlements, judgments, awards and other payments is sent to the U.S. and ensure the data is consistent with Centers for Medicare & Medicaid Services (CMS) guidelines, policies and regulations. A solid reporting solution is a critical step for protection but must also integrate business intelligence to eliminate the submission of false claims and allow the appropriate reporting of claims.

Background

The FCA was enacted in 1863 by a Congress concerned over the quality of goods being supplied to the Union Army during the Civil War. Commonly referred to as “Lincoln’s Law,” the rule depended on private citizens to help the government identify fraud against it. The private citizen, or relator, was rewarded if the government won a judgment. During World War II, the law changed and made it harder for private citizens to assist. When their incentive disappeared, the government’s ability to identify fraud slowed to a trickle even as government contracts surged because of the war. After decades of defense contractor abuse, President Reagan, working with a bipartisan Congress, changed the law in 1986.
Fines rose from a minimum penalty of $2,000 to a range of $5,000 to $10,000 per violation; recoverable damages went from double to treble; and, most importantly, private citizens again had incentives to coordinate with the government to prosecute fraud. Today, more than 80% of FCA actions are qui tam driven, and recoveries exceeded $4.9 billion in the fiscal year that ended Sept. 30, 2012. Such actions are predicted to increase into the foreseeable future.

A qui tam, or whistleblower, claim starts with an individual becoming aware of a possible fraud being perpetrated against the U.S. Typically, a whistleblower works for the organization that is alleged to be perpetrating the fraud, raises a concern and then suffers an adverse employment action for doing so. The results can be costly to the organization.

Consider a quality-control expert at Hunt Valve in Ohio. Her company made valves for nuclear attack submarines and reactors. The valves were never inspected, and paperwork was fabricated. When she raised concerns, she was fired and forced to move out of town. The responsible parties, Northrop Grumman Newport News, General Dynamics Electric Boat and three other defendants, paid a $13.2 million settlement to the U.S.

Also consider a pharmacist who was treated similarly by his new employer, Omnicare. He had previously owned a “mom and pop” drugstore outside Chicago and was a seasoned pharmacist. He discovered widespread drug switching for profit, and, when he notified his bosses, he was fired and forced to work as a temp at other pharmacies that engaged in the same bad practices. He then brought an action and secured a $120 million settlement.

A third example is rare in that the relator was the CEO of a laboratory company. He realized that a competitor was producing a particular testing product that was defective and caused dialysis patients to be overdosed with expensive and harmful drugs that Medicare paid for.
He brought the test results to the competitor’s attention but was rebuffed. He filed under the FCA and recovered $302 million for the government.

Certain private citizens are barred from being a relator. If someone was convicted of criminal conduct arising from his or her role, that citizen is not allowed to sue. If another qui tam concerning the same conduct has already been filed (the first-to-file bar), no suit is allowed. Where the government is already a party to a civil or administrative money proceeding concerning the same conduct, the action is also barred. Finally, if the information was already disclosed to the public (and the relator is not the source), the matter is barred under the “public disclosure” rule.

If allowed, a qui tam complaint is filed under seal for 60 days. During this period, the government is required to investigate the allegations to determine if it will intervene. The government can extend this period under seal if it needs further time to investigate, and it typically does so. Sometimes, the government may take a year or more to decide. If the government does intervene, it has primary responsibility to prosecute and pay for the case. When the government declines to intervene, the relator can proceed on his or her own, paying the costs, and the seal is lifted. The cost to prosecute can be prohibitive, and many FCA actions fail if the government declines to intervene. However, the law does increase the relator’s share of the damages from a floor of 15% to a minimum of 25% as compensation for the additional risk.

To win, the relator must prove that the defendant’s conduct, or lack of conduct, meets one of the statutory requirements under 31 U.S.C. §3729(a).
Most of the conduct, or lack of conduct, falls into three areas: 1) knowingly submitting a false claim or record to the government for payment; 2) knowingly avoiding the submission of a claim or record to the government to avoid paying money to the government; and 3) liability for those who conspire to violate the FCA.

A prima facie case of prosecutable FCA conduct in any of the three areas would require the relator to establish: 1) the submission of a false claim or record, or the avoidance of filing a required claim or record with the government; and 2) knowledge of the falsity itself. 31 U.S.C. §3729(b)(1) sets forth how knowledge of the false information can be defined: (1) actual knowledge; (2) deliberate ignorance of the truth or falsity of the information; or (3) reckless disregard of the truth or falsity of the information. The fact finder will require concrete evidence to uphold an FCA violation. The relator will also be focused on the applicable regulations, rules and policy memoranda from the government.

The Trends

After 1986, contractors for the Department of Defense were the primary focus of the government concerning the FCA because of unbridled fraud. When the law changed, both the government and private citizens unleashed prosecutions against contractors such as United Technologies ($150 million), Boeing ($75 million), Teledyne ($85 million) and Litton ($82 million). As lawsuits were filed and the substantial recoveries publicized, the industry responded with increased compliance and vigilance, to the point that FCA actions are rare in this area today.

Next came FCA lawsuits involving the big pharmaceutical companies. GlaxoSmithKline paid $1.2 billion for the unlawful promotion of Paxil, Wellbutrin, Advair, Lamictal and Zofran for uses not approved by the Food and Drug Administration. Johnson & Johnson paid $2.2 billion for similar off-label promotion.
These highly publicized settlements, and changes in how drug companies may interact with providers, have led to a tapering of such cases and left the FCA qui tam industry searching for the next area of fraud, waste and abuse against the government.

One way to spot the next industry trend for FCA actions is to follow the focus of government enforcement agencies. The Office of Inspector General (OIG) is one such agency to monitor. After concentrating recovery efforts on big pharmaceutical companies, the OIG has recently focused on providers of Medicare and Medicaid items and services, and FCA actions have been equally active against these providers. As a result, the OIG had a particularly effective year in 2013, recovering more than $4.3 billion from providers and returning $8 for every $1 the agency spent.

The OIG is also responsible for Medicare Secondary Payer (MSP) compliance enforcement. An example of OIG activity is a settlement late last year by a Texas health system for $3.67 million[14]. In that case, the relator alleged that Baptist Health Care billed Medicare for items and services it provided to beneficiaries that were covered by other payers, such as workers’ compensation, liability and no-fault insurance (plans). Under MSP law, Medicare is allowed to pay for such items and services when no payment has been made or payment is not reasonably expected to be made; Medicare pays, but on the condition that it be reimbursed if payment is ever made by the plan. That is what happened here: The plans paid the provider, but no reimbursement to Medicare occurred, and when the relator brought the oversight to the provider’s attention, he was ignored. The program was corrected going forward, but past errors were not.
The provider thus recognized the falsity of its information and easily satisfied the relator’s criteria when it did not reimburse Medicare for the historical errors after they were brought to its attention. The FCA community is therefore aware of MSP violations and how they can implicate the FCA.

Another area that may be subject to FCA actions is the Medicare, Medicaid and SCHIP Extension Act of 2007 (MMSEA). This law modified the MSP to require data reporting by responsible reporting entities (RREs). To encourage participation, the government included a penalty provision for noncompliance of up to $1,000 per day, per claim, for failure to report[15]. The OIG has adjusted its work plans for 2013 and 2014 to look at the MMSEA and the penalties that arise from non-reporting of data. OIG involvement typically precedes FCA qui tam actions, and this is the area where FCA actions are most likely to take root.

An example of a matter that nearly received government backing is a U.S. District Court case filed in the Western District of New York, whose seal was lifted on March 20, 2014. The government chose not to intervene; the relator, a personal injury attorney, has filed against well over 50 insurance carriers and a few trucking companies that self-insure. The main cause of action alleged is that these companies shifted MSP risk to the U.S. government through the use of a general release[16]. Whether the suit will succeed under the FCA remains to be seen, as the claim appears to be pleaded under an FCA conspiracy theory. The relator will have to prove a false claim (or the avoidance of filing a claim), knowledge of the falsity and the impact on the government. It is unclear, based on the present allegations, whether the lawsuit will survive the procedural stages, but it does demonstrate that the FCA qui tam industry is taking a serious look at the MSP area for recovery. The concerns for RREs in this area are potentially significant.
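The penalty provision above compounds quickly. A minimal sketch, assuming the statutory maximum of $1,000 per day, per claim; the claim count and delay below are invented for illustration:

```python
# Hedged sketch of how MMSEA non-reporting penalties can accrue under the
# provision cited in the article: up to $1,000 per day, per claim.
# Claim counts and delay are hypothetical.

def mmsea_penalty_exposure(unreported_claims, days_late, per_day=1_000):
    """Maximum statutory exposure for claims reported late or not at all."""
    return unreported_claims * days_late * per_day

# e.g., 25 reportable claims missed for roughly one quarter (90 days)
print(mmsea_penalty_exposure(25, 90))  # 2250000
```

Even a single missed quarterly reporting window on a modest book of claims can, at the statutory maximum, dwarf the value of the underlying claims.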
Only recently has CMS begun accepting MMSEA data from RREs. As of Jan. 1, 2010, CMS has received quarterly downloads from RREs’ workers’ compensation and no-fault plans for cases where Ongoing Responsibility for Medical (ORM) was determined. Under the CMS user guides, regulations and memoranda, these RREs must monitor all claims, regardless of status, that were open on Jan. 1, 2010, or were reopened or newly reported after that date. Once identified, ORM status is to be reported, though it can be immediately terminated if certain established CMS criteria are met. On Oct. 1, 2010, CMS started accepting the second MMSEA data element from RREs’ workers’ compensation and liability plans: the Total Payment Obligation to Claimant (TPOC), for settlements meeting certain value thresholds. These TPOCs, or settlements with Medicare beneficiaries, are typically collected the quarter before reporting and then submitted during an assigned window period that CMS sets for each RRE.

The reporting requirement under the MMSEA provides a relatively straightforward way to establish a claim or record being submitted to the government under the FCA. Whether it is false would depend on the regulations, rules, policies (user guides) and memoranda from the government about what to report and when. The FCA’s submission criterion is easily met, as it is simple to determine from the data when a claim or record was submitted or missed. Determining whether it is false is harder, but how claim systems manage information against the regulations, rules and policies could be probative on that point. This exact issue came up in an older FCA case involving a Medicare fiscal intermediary known as Highmark[17]. That entity served two roles with Medicare: one as a Medicare contractor processing payment claims, and the other as a private provider of services.
An FCA action was brought against Highmark for inconsistent claim processes, and the court found a basis to sustain the FCA complaint because the claims processing system did not properly line up with Medicare requirements. Consistent with that ruling, the CMS user guides and related policy memoranda would be similarly construed, so whether an RRE had a case to report as a TPOC or ORM would turn on how those rules apply.

An RRE’s exposure to an FCA action is mitigated if the RRE uses an MMSEA reporting system that has been tested. Most MMSEA reporting systems comply with the technical aspects of the CMS user guides; however, they lack processes that integrate the CMS regulations, policies and user guide rules to help the end user enter the appropriate data. Most reporting systems lack built-in business intelligence that gets the right information entered at the right time. The adjuster responsible for entering the data at the critical points needs to be guided to ensure correct submission to the government.

Franco Signor LLC processes more than 2 million records each month to the government for RREs. We have audited more than 1,900 RREs and have concluded that the MMSEA reporting systems themselves are sound, but the data being populated by the front lines is not consistent with the known rules, regulations and policies of Medicare. We have recommended business intelligence methodology to guide the adjuster and avoid both the potential MSP exposure and the emerging risk of associated FCA exposure. The cost is minimal to secure a baseline on MSP compliance performance. Integration of business intelligence takes time but must be accomplished before MSP penalties become fully enforceable. Do not be the RRE whose MMSEA reporting system and methodology is tested by an FCA or qui tam action.

[1] 42 U.S.C. §1395y(b)
[2] Today, the FCA penalty range is set at $5,500 to $11,000, based on automatic triggers within the legislation.
[3] Gonter v. Hunt Valve Co., 510 F.3d 610 (2007)
[4] http://www.quarles.com/omnicare-settles-more-allegations-2013
[5] http://www.phillipsandcohen.com/Success-for-Clients/P-C-s-Successful-Whistleblower-Cases.shtml
[6] 31 U.S.C. §3730(d)(3)
[7] 31 U.S.C. §3730(b)(5)
[8] 31 U.S.C. §3730(e)(3)
[9] 31 U.S.C. §3730(e)(4)(A)
[10] 31 U.S.C. §3730(c)(1)
[11] 31 U.S.C. §§3729(a)(1)(A) and (B)
[12] 31 U.S.C. §3729(a)(1)(G)
[13] 31 U.S.C. §3729(a)(1)(C)
[14] http://www.francosignor.com/blog/medicare-jurisdiction/medicare-secondary-payer-act-implicated-in-false-act-claim-against-hospital
[15] 42 U.S.C. §1395y(b)(8)
[16] U.S. v. Allstate Insurance Company, et al., Case #cv-01015-WMS, U.S. Dist. Court for the Western District of New York
[17] http://www.paed.uscourts.gov/documents/opinions/04D0039P.pdf

Roy Franco


Over the past two decades, Roy A. Franco has emerged as one of the principal architects of policies and practices that define the world of Medicare Secondary Payer (MSP) compliance. From his experience as director of risk management for Safeway from 1993-2010, he realized the need for greater clarity and efficiency in matters related to Medicare compliance.

TRIA: A Real Need, and the Time Is Now!

Without an extension of the Terrorism Risk Insurance Act, many high-profile properties will be unable to secure coverage.

The Terrorism Risk Insurance Act (TRIA) was initially passed in November 2002 as a response to the terrorist attacks of Sept. 11, 2001. Private insurance carriers had responded to the attacks by excluding acts of terrorism from coverage, and TRIA was needed to entice private carriers to once again cover this risk. By providing the necessary reinsurance so the insurance industry can properly define and limit the financial impact of another significant terrorist event, TRIA-backed terrorism coverage is widely available today at affordable costs.

Currently, TRIA is scheduled to expire at the end of 2014, and Congress is actively debating whether to extend or modify the current coverage. Many in Congress argue that TRIA is no longer necessary. They feel that the threat of terrorism has diminished and that the private insurance industry will continue to provide terrorism coverage without the federal government backstop.

The bombings at last year's Boston Marathon highlight that, indeed, the U.S. still faces a very real threat of terrorist attacks. And the threat is far greater than many realize. A March 2014 report from the Insurance Information Institute highlighted the continued threat of terrorism in the U.S. by detailing 21 separate attempted terrorist acts that were thwarted by law enforcement between 2009 and 2013. Unquestionably, the threat of terrorist attacks against the U.S. remains high.

The pending expiration of TRIA is already showing what private carriers’ response will be without this financial backstop. Property carriers are tying their terrorism coverage expiration dates to the expiration of TRIA. Without TRIA, many high-profile properties will be unable to secure coverage from the private marketplace. Workers’ compensation coverage is statutory and cannot exclude terrorism as a cause, so carriers in this market are responding to TRIA’s pending expiration by declining coverage to employers in certain geographic areas beyond the end of 2014. Regardless of location, industries with a high concentration of employees, such as healthcare, higher education, defense contractors, financial services and technology companies, are also finding limited markets beyond the end of 2014. This leaves employers with fewer options, which will ultimately result in increased pricing.

But there is a solution. TRIA works. It provides the high-level backstop that the insurance industry needs to forecast potential exposure to a terrorist event and to allow companies to underwrite the coverage. TRIA is designed to be triggered only by an extremely large event, even beyond the scope of the Sept. 11 attacks. There are also recoupment provisions built into the law that will repay the federal government if TRIA is triggered, so it is not simply a handout to the insurance industry. TRIA has truly been one of the most successful public/private partnerships in recent memory.

The time to act is now. Congress is debating the issue. I encourage you to reach out to your members of Congress and let them know your thoughts on this important topic:

http://www.usa.gov/Contact/US-Congress.shtml#Contact_Your_Representative_in_the_U.S._Congress

Finally, Marsh recently released its 2014 Terrorism Risk Insurance Report. This report summarizes the current outlook regarding TRIA’s potential expiration, provides benchmarking related to terrorism insurance take-up rates and pricing, and offers alternative insurance and risk management solutions for terrorism risks that will be useful for organizations even if TRIA is renewed or extended. I encourage you to read the full report here.

The Science (and Art) of Data, Part 2

There are not enough good data scientists to go around. So, should you "buy" them, "rent" them or "build" them? A hybrid may be the answer.

Given the high and growing demand for data scientists, there are simply not enough of them. Accordingly, it is important to consider how an insurer might develop a core talent pool of data scientists. As is often the case when talent is in short supply, acquiring (i.e., buying) data scientist talent is an expensive but fairly quick option. It may make sense to hire one or two key individuals who can provide the center of gravity for building out a data science group. A number of universities have started offering specialist undergraduate and graduate curricula focused on data science, which should help address the growing demand relatively soon. Another interim alternative is to “rent” data scientists through a variety of means: crowdsourcing (e.g., Kaggle), hiring freelancers, using new technology vendors and their specialists or consulting groups to solve problems, and engaging consulting firms that are building these groups in-house.

The longer term and more enduring solution to the shortage of data scientists is to “build” them from within the organization, starting with individuals who possess at least some of the necessary competencies and who can be trained in the other areas. For example, a business architect who has a computational background and acts as a liaison between business and technology groups can learn at least some of the analytical and visualization techniques that typify data scientists. Similarly, a business intelligence specialist who has sufficient understanding of the company’s business and data environment can learn the analytical techniques that characterize data scientists. However, considering the extensive mathematical and computational skills necessary for analytics work, it arguably would be easier to train an analytics specialist in a particular business domain than to teach statistics and programming to someone who does not have the necessary foundation in these areas.

Another alternative for creating a data science office is to build a team of individuals who have complementary skills and collectively possess the core competencies. These “insight teams” would address high-value business issues within tight time schedules. They initially would form something like a skunk works and rapidly experiment with new techniques and new applications to create practical insights for the organization. Once the team is fully functional and proving its worth to the rest of the organization, then the organization can attempt to replicate it in different parts of the business.

However, the truth is there is no silver bullet to addressing the current shortage of data scientists. For most insurers, the most effective near-term solution realistically lies in optimizing skills and in team-based approaches to start tackling business challenges.  

Designing a data science operating model: Customizing the structure to the organization’s needs

To develop a data science function that operates in close tandem with the business, it is important that its purpose be to help the company achieve specific market goals and objectives. When designing the function, ask yourself these four key strategic questions:

  • Value proposition: How does the company define its competitive edge? Local customer insight? Innovative product offerings? Distribution mastery? Speed?
  • Firm structure: How diverse are local country/divisional offerings and go-to-market structures, and what shared services are appropriate? Should they be provided centrally or regionally?
  • Capabilities, processes and skills: What capabilities, processes and skills does each region require? What are the company’s inherent strengths in these areas? Where does the company want to be best-in-class, and where does it want to be best-in-cost?
  • Technology platform: What are the company’s technology assets and constraints?

There are three key considerations when designing an enterprisewide data science structure: (a) degree of control necessary for effectively supporting business strategy; (b) prioritization of costs to align them with strategic imperatives; and (c) degree of information maturity of the various markets or divisions in scope.

Determining trade-offs: Cost, decision control and maturity

Every significant process and decision should be evaluated along four parameters: (a) need for central governance, (b) need for standardization, (c) need for creating a center of excellence and (d) need for adopting local practices. The figure below illustrates how to optimize these parameters in the context of cost management, decision control and information maturity.

This approach encourages a flexible, responsive hub-and-spoke model: The hubs centralize the key decision science functions that need greater governance and control, and harness unique local market strengths in centers of excellence, while regional or country-specific spokes handle the functions and outputs that require local market data inputs but adhere to the central models and structures.

Designing a model in a systematic way that considers these enterprisewide business goals has several tangible benefits. First, it will help to achieve an enterprisewide strategy in a cost-effective, timely and meaningful way. Second, it will maximize the impact of scarce resources and skill sets. Third, it will encourage a well-governed information environment that is consistent and responsive throughout the enterprise. Fourth, it will promote agile decision-making at the local market level, while providing the strength of heavy-duty analytics from the center. Lastly, it will mitigate the expensive risks of duplication, redundancy, inconsistency and inefficiency that can result from disaggregation, delayed decision-making and the lack of appropriate skill sets and insights.


Anand Rao


Anand Rao is a principal in PwC’s advisory practice. He leads the insurance analytics practice, is the innovation lead for the U.S. firm’s analytics group and is the co-lead for the Global Project Blue, Future of Insurance research. Before joining PwC, Rao was with Mitchell Madison Group in London.

Many Agents Expose Themselves to Dangers

Some 90% of E&O suits against agencies could be prevented through careful attention to practices and procedures.

Many insurance agents are confused about their role, which brings about misplaced loyalties and greater E&O exposures.

Let’s start with a question: Does the agent owe the policyholder the common law duty of good faith and fair dealing? Most insurance agents would respond with a resounding “yes” – but they’re wrong.

The duty of good faith and fair dealing is a non-delegable duty that applies only between the parties to the contract, and the parties are the insurance company and the insured – not the agent. Put simply, the agent is not the agent of the policyholder. The duties of good faith and fair dealing belong to the insurance company, not the agent.

So what duties does an insurance agent owe to the policyholder/applicant? Under common law, there are really but two:

  • Use reasonable diligence in attempting to place the requested insurance.
  • Inform the client promptly if unable to do so.

That’s it!

Some states may provide for a “special relationship” to have been created, which may provide for some additional duties. However, such a relationship is state-specific, requires some acts of commission to create and is beyond the parameters of this article.

Under statutes, there is really only one duty: Refrain from deceptive trade practices.

Every agent knows that the insurance code devotes many pages to prohibited practices. However, a careful review of the NAIC model law (on which all states base their deceptive trade practices codes) finds that every deceptive trade practice applicable to an insurance agent involves the commission of an act, not the omission of one. Under the model law, doing something incorrectly is worse than not doing anything. Insurance agents may nonetheless assume duties that are not imposed on them by law, believing they have such duties. If duties are “assumed,” even through ignorance, the law will hold agents to a professional standard for those assumed duties. If you hold yourself out as a coverage expert, the law will hold you to that expert standard.


By contrast, the duties owed by the agent to the insurance company are many. As a fiduciary of the principal, the agent owes the company:

  • Loyalty
  • Utmost good faith
  • Candor/full disclosure
  • Refraining from self-dealing
  • Integrity, skill and care
  • Fair and honest dealing
  • Duty to follow instructions

Something many insurance agents may not have considered: Your responsibility not to breach your fiduciary duties to the insurance company is the largest part of your professional and ethical responsibilities as an agent.

(It is not a two-way street. The insurance company is NOT a fiduciary of the agent. In other words, an agent acts on behalf of the insurance company, but the insurance company does not act on behalf of the agent. Under common law, the insurance company only owes the agent: indemnification, payment of compensation and fair dealing.)

Some confusion may occur about agents’ responsibilities because of two issues: vicarious liability, which holds that a principal may be held liable for actions by its agent, and the legal maxim that a wrongdoer is ultimately responsible for his own wrongdoing. If an insurance company is held liable for the wrongdoing of its agent (vicarious liability), the insurance company can seek recovery from the agent (holding the wrongdoer ultimately responsible).

If the insurance company is held vicariously liable for the agent’s wrongdoing, a decision to seek recovery from the agent may depend on:

  • What did the agent do wrong?
  • What recovery did the insured get?
  • What recovery is available to the principal (the insurance company)?
  • What was the agent’s thinking?

A common misconception is that all one has to do to avoid personal liability is to establish a corporation or limited liability entity. That is incorrect because:

  • Professional liability is personal liability.
  • Fiduciary liability is personal liability.

Summary

Insurance agents may assume many duties not imposed upon them by law. Assuming those duties holds the insurance agent to a professional standard not otherwise imposed.

The majority of an agent’s duties are owed to the insurance company, and it is the company’s vicarious liability for the actions of the agent that may ultimately get the agent sued. In other words, the biggest E&O exposure an agent may face is ultimately an action brought by the insurance company because of a wrong action or breach of fiduciary duties. Knowing this makes it all the more important that the agent fully understand and trust the insurance company before assuming the responsibilities and duties imposed upon agents.

What the Next-Gen Insurer Will Look Like

The journey to the Next-Gen Insurer has started, with or without you. The longer you wait to begin your journey, the more difficult it becomes.

Innovation is a crucial strategic mandate that is defining a new era of winners and losers. From retail to entertainment and everything in between, decades of business traditions and assumptions are toppling because of change – change that runs the gamut from customer behaviors and expectations to the use of new technologies. This level of change and disruption is unprecedented in the history of the insurance industry. And the pace just doesn't slow down: new technologies, the mash-up of technologies, new uses for these technologies, new competition, new customer behaviors, needs and expectations. These changes are demanding a new and responsive insurance industry.

At the same time, the impact of influencers is escalating -- from both inside and outside the industry -- and the explosion of data, the lifeblood of insurance, is creating new challenges as well as opportunities. This blitz is challenging and disrupting sacred business and operational models and assumptions, requiring new thinking, experimentation, the adoption of new technologies and yes … innovation. Many insurers, large and small, are grappling with getting their heads around how the business of insurance will change in the next three to five years.

While looking to the future has long been a part of our very culture, our ability to envision the future for insurance companies is often stymied by the priorities and challenges of today. However, if we want a future, we must rethink how we embrace innovation as the core of the Next-Gen Insurer.

A Next-Gen Insurer must reimagine the core components of insurance – the business models, products and services, infrastructures and customers. All need to be underpinned by a culture that embraces collaboration, transformation and innovation. Forward-thinking insurers are defining what they will look like three, five and 10 years from now, planning how they will respond to influencers within and outside the industry, the path they will take to get there and the relationships that will fuel the journey.

Many insurers are on the journey, but they are going at different speeds and focusing on the different priorities that will uniquely differentiate and position them as market leaders. Some are reimagining the fundamentals of insurance, while others are retooling products, services, distribution and processes. Regardless of the approach, becoming a Next-Gen Insurer is a long-term, enterprisewide endeavor. It’s important to think big even though actions may start small.

So how to begin?

First, recognize that the innovation journey has started, with or without you. The longer you wait, the more difficult it becomes and the more likely the delay is to be detrimental to your long-term business. Insurers must define their unique vision for how they will evolve into a Next-Gen Insurer by examining the fundamentals of the insurance business and determining how new levels of agility, flexibility, creativity and competitiveness can be created. There are four critical business components that insurers must reshape in their Next-Gen Insurer model: the customer, products and services, infrastructure and the business model.

At the same time, companies must identify, track, assess and define how to respond to or leverage key influencers and trends. Prioritize them, developing scenarios and plans of action, experimenting and collaborating. This is paramount, not just for competitive advantage but for long-term survival. The coming years promise unparalleled opportunity for insurers to increase their value to their customers. Those that best capitalize on the key influencers will realize the most in rewards. In contrast, those that do not prepare for the future will find themselves falling behind, losing both competitive position and financial stability.

Equally critical is recognizing that no business, regardless of size, can go it alone and expect to lay hold of all the possibilities and reap all of the benefits. Most insurers lack the time, expertise and resources to track all of the influencers unless they engage outside industry resources. Insurers must identify partners who can mobilize an ecosystem of both internal and external relationships and resources to capture potential, change legacy cultures and enable the ideas and technologies that can be uniquely deployed within their companies to create their Next-Gen Insurer.

But most importantly, create and nurture a culture of innovation that starts at the top and is seen, heard and acted upon each and every day. Begin by identifying those within your organization who are the outside-the-box thinkers: those renegades and dreamers who can be advocates on the journey.

The innovation journey toward reinventing the business of insurance has started. Don’t delay, because what is innovative today will be expected tomorrow.

Begin your journey today -- to ensure that you have a tomorrow.

For information about a detailed report on the Next-Gen Insurer, click here. To learn more about where the leaders in the industry are in their innovation journey, consider attending the 2014 SMA Summit in Boston Sept. 15, 2014.