
Things Not Mentioned in ProPublica

The ProPublica series makes valid points about problems in workers' comp but misses some context and omits many, many positive stories.

By now, most of you have read the series of articles published by ProPublica on “The Demolition of Workers’ Comp.” These well-written articles touched on some very important issues faced by our industry. For example, caps to the wage-replacement benefits provided under workers’ compensation can devastate employees earning higher wages. In addition, there is wide variation in the total-loss-of-use benefits provided under the various state systems. Legislators around the nation and people in the workers’ compensation industry would do well to carefully consider some of the issues raised by ProPublica.

As someone who has been in the workers’ compensation industry for more than 25 years, I also found several shortcomings in the ProPublica stories. For example, ProPublica failed to touch on what is often the biggest reason behind an injured worker’s poor recovery: the secondary-gain motivation of unscrupulous medical providers and attorneys. I remember years ago, when I first started handling claims, Texas had the worst workers’ compensation system in the nation. Plaintiff attorneys would refer injured workers to physicians who had been sued so many times that they had lost privileges at every hospital in the state. But that didn’t discourage these physicians, who had set up operating rooms in their offices and continued ruining the lives of so many injured workers. Many of the injured workers treated by these physicians were left in ruin, both physically and financially. But this harm to injured workers was not being done by insurance companies or employers or the state systems. It was being done by the attorneys those injured workers trusted and the doctors who worked with those attorneys. These attorneys and doctors were not motivated by what was best for the injured worker. They simply wanted a payday.

Today, Texas has one of the best workers’ compensation systems in the nation. What changed? The legislators and regulators realized that they had to stop those who put their self-interests ahead of injured workers. Legislators instituted treatment guidelines, mandatory second opinions, attorney fee caps and approved physician panels. Over time, the doctors who put their financial interests ahead of the health of injured workers were removed from the system. Texas is a remarkable success story that illustrates how the workers’ compensation system can be improved to provide better outcomes for both injured workers and employers.

ProPublica also noted that insurance premium rates are at a 25-year low. While this may be true, it is an extremely misleading statistic. Over time, there have been significant advances in loss prevention and safety. Workplaces are safer than they were 25 years ago. Lower “rates” reflect both safer workplaces and more competition among carriers. However, lower “rates” do NOT mean employers are paying lower premiums than they were 25 years ago. I would challenge you to find any employer that is paying lower premiums than 25 years ago. For most employers, premiums have increased steadily over time, and they continue to increase. Rate is but a single element in the calculation of premiums, so looking at rate alone as a performance measurement does not provide an accurate reflection of the true picture. Also, based on National Council on Compensation Insurance (NCCI) data, claims costs have risen steadily over the last 20 years as well.
So, premiums and claim costs are significantly higher than 25 years ago. Those data elements are a better reflection of the actual state of the workers’ compensation industry than rate.

Many choose to use the ProPublica articles as an indictment of the entire workers’ compensation system. While I agree the system is far from perfect, it functions quite well most of the time. The vast majority of injured workers receive medical treatment and return to work in their pre-injury job without any conflict or complications. Those workers do not retain attorneys, suffer financial hardship or have significant lasting physical effects from their work injury. For these workers, the system does exactly what it is intended to do. There will always be examples of injured workers for whom the workers’ compensation system produced an undesired result. It is impossible to design a perfect system. However, for every example where the system didn’t work, I can provide examples where it worked very well. In the workers’ compensation industry, we are in the business of helping people recover from injury and resume their place as productive members of our society. For the most part, this is something the industry does very well.

My company deals with catastrophic injuries and other high-dollar claims. Our caseloads are literally the “worst of the worst” in the workers’ compensation industry. We see horrible, life-changing, devastating injuries every day. Yet, in spite of this, I can provide countless examples of how the efforts of our staff, the employers we insure and the claims adjusters and service providers we work with went above and beyond what was required under the workers’ compensation statutes. For example:
  • A paraplegic worker lived with his family in a dilapidated mobile home that was insufficient for his wheelchair. We were obligated to provide him with comparable housing, which would have been a new mobile home. Instead, we purchased a house that was large enough for him and his family, complete with all the necessary modifications needed to make the home handicap-accessible.
  • A paraplegic worker used to enjoy hunting with his family. We purchased an all-terrain wheelchair in addition to his regular wheelchair so he could continue to enjoy this activity with his family.
  • We recently purchased experimental motorized leg braces for a paraplegic worker. Because of these braces, he is able to walk again.
  • I have seen numerous injured workers whom we have assisted in recovering from an addiction to opioid pain medications. When these injured workers are free of this addiction, their quality of life and their relationship with their families is significantly improved. We have had both injured workers and their family members thank us for helping them overcome this addiction.
I’m sure every workers’ compensation carrier or third-party administrator has similar examples they could share. My point is, for everything wrong that people can identify about the workers’ compensation system, there are also a lot of good things about it. There are also many good people in this industry who work very hard to assist injured workers in their recovery. At the end of the day, claims adjusting is about helping people. Perhaps we, as an industry, need to do a better job sharing the positive stories of what we do every day to make the lives of injured workers better.

What Risk Reports Won't Tell You

Monthly risk reports typically look at a single point (usually P80) that hides crucial information about the probability and impact of a risk.

Usually, the first questions the project director asks are:
  1. “What are the top 10 risks by cost P80?”
  2.  “What is the P80 of cost risk?”
  3. “How does the total compare with the cost contingency?”
These seem like fundamental, simple questions for a project director, but they actually display a complete failure to understand the nature of risk or risk over time. In this short paper, I want to summarize just what information monthly risk reports can provide that is useful to project managers.

1. Quantitative Risk Analysis

Monte Carlo simulation is the core of quantitative risk analysis (QRA) and is used to combine risk distribution assessments for probability and consequence. Risk is historically defined as the product of probability and consequence (De Moivre 1711). But multiplying two distributions together is no casual mathematical exercise. On a mega-project, there can easily be a thousand-plus risks. The sum of all the products of the individual risks is a distribution for the total risk.

Risk has two components:
  i. Probability of occurrence, the subjective belief that the risk will occur. This is a binary distribution because it has two states -- i.e., it happens or it doesn't -- and is called a Bernoulli distribution.
  ii. A consequence, measured in terms of cost, delay or performance deterioration. This is also a distribution. In project risk, three-point triangular or PERT distributions are commonly used.

With the understanding that risk is composed of two probability distributions, one can see that describing risk magnitude in the "project management way," by a single value (the P80 of cost), doesn't make any sense at all. The usual way to show a risk distribution, for either an individual risk or for total risk, is with a Pareto graph, which combines a probability density function (pdf) and a cumulative distribution function (cdf). These are also known as a histogram with an S-curve.
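To make the mechanics concrete, here is a minimal Monte Carlo sketch of that combination step, written in Python. It is purely illustrative: the three risks, their probabilities of occurrence and their triangular cost ranges are invented for the example and are not drawn from any project.

```python
# Minimal Monte Carlo sketch of quantitative risk analysis (QRA).
# Each risk = Bernoulli(probability of occurrence) x triangular consequence.
# All risk parameters below are illustrative, not taken from the article.
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000

# (probability, (low, most likely, high) cost consequence in $m) for a few risks
risks = [
    (0.30, (1.0, 2.0, 5.0)),
    (0.60, (0.5, 1.5, 3.0)),
    (0.10, (2.0, 6.0, 15.0)),
]

total = np.zeros(n_trials)
for p, (low, mode, high) in risks:
    occurs = rng.random(n_trials) < p                   # Bernoulli occurrence draw
    impact = rng.triangular(low, mode, high, n_trials)  # consequence draw
    total += occurs * impact                            # product, summed over risks

for q in (50, 68.2, 80, 90):
    print(f"P{q}: {np.percentile(total, q):.2f}")
```

Each trial multiplies a Bernoulli occurrence draw by a consequence draw and sums across risks; percentiles such as the P80 discussed below are then simply read off the resulting total-risk distribution.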

Figure 1. A Pareto Graph

2. What Are the Top 10 Risks?

It is common for the project director to request the top 10 risks in monthly risk reports for both cost risk and schedule (delay) risk. These are usually ranked in descending order of P80.

What is P80? It is the 80th percentile of the distribution -- 80% of the data points lie to the left of it and 20% to the right. The interpretation is that one can be 80% sure that the cost or delay will be at that value or less and, conversely, 20% sure that the cost/delay will be greater. Some companies use the P90, which suggests they are more risk-averse. Some use P75, which is the upper quartile. Some use P68.2, a nod to the 68.2% of a normal distribution that lies within one standard deviation of the mean -- the statistical metric for uncertainty. And some companies use the P50, which is the same as tossing a coin.

It is not possible to use Pareto graphs to identify the top risks. This is best done using any or all of the following graph types:
  1. Box and whisker graph
  2. Tornado diagram
  3. Density strip
All three of these methods work well in visually presenting the risks in order of magnitude, although the tornado chart is rather a "black box" method that may give different results from the other two graphs.

Figure 2. Box & Whisker Graph

Figure 3. Tornado Diagram

Figure 4. Impact Density Strips

It is important to understand that the P80 value does not tell one which is the biggest risk; the P80 is a single point on the distribution that simply means one can be 80% sure that the risk will cost $X or less or, equivalently, 20% sure that it will cost $X or more! Do you get the message there about uncertainty? To illustrate this important point, I have plotted 10 risks, all with approximately the same P80 = 54.2, in the iso-contour graph below. Each of the risks has a different consequence and a different probability.

Figure 5. Iso-Contour Chart of 10 Risks With P80 = 54.2

Using the box & whisker plot and the impact density strip, it should be immediately apparent, even to the untrained eye, that the risks are very different in terms of uncertainty and consequence. The challenge is to determine which is biggest. (A short numerical sketch after the figures makes the same point.)

Figure 6. Density Strip of the 10 Risks

Figure 7. Box & Whisker Plot of the 10 Risks
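The same effect is easy to reproduce numerically. In the sketch below, two hypothetical risks (the probabilities and triangular cost ranges are invented for illustration, not taken from the 10 risks plotted above) are deliberately parameterized so that they share roughly the same P80 while carrying very different uncertainty:

```python
# Illustrative sketch with hypothetical parameters: two risks engineered to
# share roughly the same P80 while having very different uncertainty.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

def occurrence_weighted_cost(prob, low, mode, high):
    """One risk: Bernoulli occurrence draw times a triangular consequence draw."""
    occurs = rng.random(n) < prob
    return occurs * rng.triangular(low, mode, high, n)

risk_certain = occurrence_weighted_cost(1.00, 50, 54, 57)     # near-certain, tight range
risk_uncertain = occurrence_weighted_cost(0.25, 20, 55, 200)  # rare, but wide-ranging

for name, x in [("certain", risk_certain), ("uncertain", risk_uncertain)]:
    print(f"{name:9s}  P80 = {np.percentile(x, 80):6.1f}   std = {x.std():6.1f}")
```

Both risks report a P80 near 55, yet their standard deviations differ by more than an order of magnitude -- exactly the information that a ranking by P80 alone discards.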

We can see that risk 5 is actually quite certain, whereas risk 2 is very uncertain, and yet they both have the same P80. Here we need to understand how to deal with a risk and its certainty. It should now be clear that ranking and prioritizing risks on the basis of P80 alone is neither correct nor particularly meaningful, as all evidence of the probability distribution and the impact distribution is missing. The three graphical solutions -- box plot, density strip and tornado diagram -- make it easier for managers to prioritize the risks visually by relating directly to both consequence and uncertainty.

3. What Is the Significance of the Total P80 Cost?

Almost the very first number that appears in the monthly risk report will be the P80 total for all cost risks. You might wonder why the P80 instead of the P90 or P50 or the standard deviation (P68.2). To project directors, the P80 is a magic number that can be shared with colleagues, the directors, the client. Why the P80 became the popular percentile is unknown. There is obviously a relationship between risk aversion and risk taking -- the more risk-averse, the higher the P value that is preferred.

-- Contingency as a percentage of baseline cost

The project planning process will involve detailed cost estimates by quantity surveyors and cost engineers. These estimates will become the baseline cost of the project, covering materials, labor and inflation. The risk manager will endeavor to get the cost team to do a risk review and build a range of uncertainties around the costs. During the design stage, this will usually be a +/-25% ballpark figure, with the range narrowing as design and time progress. The formula used for determining cost-based contingency is usually:

P80 of cost estimate – base cost = contingency

Often, the cost team includes project risks in the calculations, which are based on their personal experiences, which are usually undocumented and which inflate the base cost. You do not want this to happen. The planning team will, at the outset, establish some percentage of the total cost as a contingency. On the most recent mega-project, valued at $2.3 billion, the contingency was 7% of the total forecast cost. How this contingency was determined was undocumented but presumably based on some experiential rule of thumb of the planning team. Curiously, this figure was shown on the management reports as a P80, presumably in an endeavor to give credibility to the contingency figure.

-- Contingency as a function of risk assessment

The risk management process is a journey over the duration of the project. It starts at the design phase, progresses through manufacturing, then on to construction and finally to commissioning. Although these are broadly distinct phases, there will be many overlapping time periods. The time of greatest risk will be during the design phase, when everything is pretty much unknown to the whole project team. The uncertainties will be legion, from planning permission to technology, contracts to quality control, civil engineering works to change management. The risk should appear as a series of waves, growing rapidly during the design phase and then decreasing until approaching zero as the problems are solved. After all, you wouldn’t begin a project with huge quantities of unresolved risk. The graph below gives an idea of the risk over time over the course of the project:

Figure 8. Risk Over Time

As each phase progresses, the risk will ebb and flow, progressively decreasing as the project concludes successfully. The risk total for the month has meaning only in the context of the previous month’s risk total, the phase of the project and the forecast of future risk over the course of the project.

Figure 9. A Box & Whisker Plot of the First 10 Monthly Total Risk Values

It can be seen from Figure 9 that the risk progressively increases until month 9, after which it appears to start declining. Risk will follow the phases described in Figure 8 and can be graphed for each individual phase or as a global overview. It should be apparent that the P80 doesn’t help the project director understand the current or future risk on the project, the nature of the uncertainty or the risk over time. A simple enhancement in Excel, combining Figures 8 and 9, is given in Figure 10 so that deviations from forecast are clearly visible and comparable.

Figure 10. Current Monthly Risk Total vs. Forecast P50 & P90

The range of uncertainty in the current situation and in the forecast is clearly displayed. Alternative measures of uncertainty can be used -- e.g., mean +/- 1 standard deviation.
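For completeness, the band values behind a Figure 10-style chart can be tabulated directly from each month's simulation output. The sketch below is illustrative only: the gamma distributions are a stand-in for whatever per-month Monte Carlo totals a real model would produce.

```python
# Hypothetical example: summarize each month's simulated total-risk distribution
# with the percentiles (and mean +/- 1 sd) used to draw a forecast band.
import numpy as np

rng = np.random.default_rng(7)
# Stand-in for real per-month Monte Carlo output: month -> array of simulated totals.
monthly_totals = {m: rng.gamma(shape=2 + m, scale=3.0, size=50_000) for m in range(1, 13)}

print("month    P50    P80    P90  mean-1sd  mean+1sd")
for month, draws in monthly_totals.items():
    p50, p80, p90 = np.percentile(draws, [50, 80, 90])
    mean, sd = draws.mean(), draws.std()
    print(f"{month:5d} {p50:6.1f} {p80:6.1f} {p90:6.1f} {mean - sd:9.1f} {mean + sd:9.1f}")
```

Plotting the current month's total against the P50 and P90 (or mean +/- 1 standard deviation) columns gives a comparison of the kind shown in Figure 10.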

In Figure 10, there is a noticeable discrepancy between the current total risk and the forecast. It is essential to understand and report on the source -- for example, possibilities such as these:
  1. Fewer risks have been identified than expected
  2. The quantification of risks is too optimistic, i.e., lower cost
  3. The handling plans are assessed as more effective
  4. The forecast risk is higher than what is actually being experienced during the design stage
  5. The design phase is running behind schedule
  6. Improved estimation skills are required, so a calibration training course needs to be put in place
It is important for the project director to understand exactly what is being measured in this concept of total risk. Useful reference: How to Manage Project Opportunity and Risk: Why Uncertainty Management Can be a Much Better Approach Than Risk Management, by Stephen Ward and Chris Chapman.

Gavin Lawrence

Gavin Lawrence has 18 years of experience as a risk manager for international mega-projects in the UK, Africa, Venezuela and Russia. Projects include high-speed rail, subsea pipelines, oil rigs, underground metro systems, offshore windfarms and urban redevelopment projects.

Convenience, Meet Technology -- 4 Steps

We now live in a one-click-to-buy world where customers demand extreme convenience based on deep understanding of their needs.

With technology rapidly changing, customers now expect simple, fast transactions from companies. These expectations have helped create a one-click-to-buy world, further changing not just how, but why and where, we spend our hard-earned money.

On-the-Go Shopping Goes State-of-the-Art

A case in point is multinational grocer Tesco, based in the UK. Known in Korean markets as Homeplus, Tesco sought to better understand the dilemma that grocery shopping posed for many time-compressed people there. Why not bring the store to the people? Why not turn wait time during daily commutes into time for shopping? Why not use readily available technology to give consumers grocery stores in the palms of their hands? “Virtual” grocery stores, complete with mock aisles and fridge displays, were placed in mass transit stations. Commuters scanned the QR codes of the products they wished to buy with their smartphones and scheduled a home delivery at a time convenient for them. Tesco’s online sales skyrocketed. Why did this work? First, Tesco recognized the thing most desired by consumers: convenience. Second, Tesco used the best tools, such as smartphone apps and mobile technology, to give consumers what they wanted.

Hearing Customers, Responding With Useful Technology

It’s not just about having a slick interface (though that never hurts). Companies that are on top of the online shopping game are providing seamless and smart integration of both in-store and online customer experiences. The use of “virtual” stores by Tesco was a smart move, allowing customers to explore the technology in a setting that was familiar to them and making the acclimation to full online shopping seamless. The “aisles” and “fridges” brought with them the comfort of a brick-and-mortar store, allowing for the browsing and impulse buying that we often associate with picking up our groceries. The smartphone app, however, upped the ante and scratched other consumer itches: the desire for speed and convenience.

Four Steps to Take:
  1. Seek to have a deep understanding of your customers' needs.
  2. Combine that understanding with technology to build a truly singular and timely customer experience that is both appealing and significant.
  3. Integrate mobile and online technologies into the company’s identity and marketing.
  4. Bring the best uses of technology to the company, and deliver that tech-savvy company to the consumer.
Tesco offers us a portrait of what it looks like to use technology to stay competitive in an increasingly diverse market. And remember: at the heart of this success is a deeper understanding of what the customer wants.

Donna Peeples

Donna Peeples is chief customer officer at Pypestream, which enables companies to deliver exceptional customer service using real-time mobile chatbot technology. She was previously chief customer experience officer at AIG.

The Questions to Ask on Telemedicine Risk

Insureds often ask, "Am I covered if...?" That can be a good question but, for complex issues like telemedicine, must lead to dozens more.

A conversation with our insureds often begins with them asking, “Am I covered if…?” As an insurance carrier risk manager, I’m happy that they’re asking (and I’m really happy if it turns out they are asking before starting the business, practice, procedure or service that prompts the question). The question gives us all an opportunity to refocus on applying the risk management process to identify, analyze, treat and then re-evaluate the risk.

Increasingly, these types of questions among healthcare providers are related to some kind of telemedicine service or activity, and they are interesting questions. The applications for telemedicine are growing exponentially (as is the variety of providers), much as TV and microwaves caught fire decades ago (figuratively, not literally, although hazards are also a domain of risk management). Why the surge in interest? For starters: healthcare cost containment; an increasing financial incentive; increasing market share; access to specialists and other care providers in rural or other underserved areas; access to healthcare for those who are unable to travel; convenience and time management (both for patients and providers); and continuity of care.

Although “risk avoidance” is a legitimate risk treatment technique, it can be an easy out and may be a natural reaction to an emerging risk area such as telemedicine. It is critically important that today’s healthcare risk managers not be known as “the just say no people.” As a former clinician, I have seen how cutting-edge approaches to medicine have advanced the state of healthcare. And telehealth is here to stay!

Let’s get back to the “Am I covered?” question. The short answer would, I hope, be: “Of course you’re covered. Telemedicine is a good thing for patients.” Right? But (yes, risk managers like to say “but,” usually after saying, “I don’t want to be negative about this idea, service, procedure, etc.”) a certain road is paved with good intentions, and there are some significant possible risk exposures and obstacles to implementing a safe, compliant and effective telemedicine program. Although risk managers are generally fun people -- honest -- we can be perceived as wet blankets at times, because we always look at the world through the “risk vs. benefit” lens. So, in reality, insurance carrier risk managers may need to say, “It depends. . . .” and then, “I have some questions.” This is where you can help by confidently putting your best foot forward and saying, “Ask away, I have the answers you need.”

A risk manager’s job (yours and mine), first and foremost, is to protect the assets of the organization. And that protection certainly includes helping to appropriately support and defend good clinical care and those who provide it, especially when someone registers a complaint or pursues a professional liability claim. We need to look not only at specific professional liability insurance policy language but also consider all of the risks and exposures associated with telemedicine and determine whether you are adequately treating those risks. Insurance coverage is just one of the risk treatment techniques (risk transfer) used to address risk exposures -- even though it can feel like the most important one to those who are kept awake at night worrying about an organization’s risk exposures, liability and finances.
In today’s complex healthcare environment, there is a need for an operational (enterprise) risk management approach to decision-making to help manage liability exposure across all of the domains of risk. Along with that, the healthcare risk manager’s role is evolving toward taking a leadership role in this approach. But developing a culture and discipline of identifying, analyzing and treating risks is more of a journey than a sprint, so try not to be daunted by the process right away. You can be more “tactical” (a rapid response) as circumstances require once you establish the process.

An Example

Let’s take one risk exposure for a telemedicine program -- credentialing and privileging -- and illustrate how we can help answer the “Am I covered…?” question by using the risk management process (the first three steps, anyway; I’ll leave the “monitor/re-evaluate” steps to you).

Risk Identification

What do we already know about some of the key risk exposures related to telemedicine? We recognize that they generally may include the following, but you’ll need to also identify any other risks unique to your program:
  • Information privacy/data security
  • Potential technology-related issues (e.g., failure, resolution, accuracy, etc.)
  • Data ownership/retention/destruction/e-discovery
  • Credentialing/privileging of providers
  • Patient selection, communication, education
  • Documentation of communication and encounters
  • Consent for use of technology/treatment
  • Health insurance payer coverage/reimbursement
  • Billing
  • Compliance
  • Contracts/agreements
  • Liability insurance coverage
Analyzing the Risk

Here’s a series of questions that I’d like to know the answers to as your carrier in evaluating you as a “risk.” You should have done your due diligence already. The sample list is not meant to be exhaustive, but illustrates developing a discipline around a comprehensive analysis:
  • Does your governing body understand its role and responsibilities regarding credentialing and privileging? (Standards of care, scope of practice, negligent credentialing, non-delegable duty, D&O, E&O, etc.) Even when a third-party credentialing verifications organization is used, privileging decisions are still the responsibility of the hospital’s governing body.
  • Are you a hospital telemedicine site? A free-standing telemedicine site? An originating site or distant site? Is there a solid understanding about how telemedicine credentialing and privileging should be done depending on your role?
  • Have you met all applicable state statutory requirements for a telemedicine license? How about applicable state medical/licensure board requirements for all licensed practitioners? Non-licensed? What are the exceptions to requirements?
  • Have you met all accreditation standards and requirements (if applicable)?
  • Are there written policies and procedures in place that outline the credentialing and privileging appointment and reappointment processes, criteria for telemedicine activities, scope of practice protocols, OPPE and FPPE and the data from the originating and distant sites that must be collected and analyzed (and acted on)? How will the originating site be notified of privilege or license revocations or suspensions?
  • Do your hospital or medical staff bylaws and rules and regulations accurately reflect these issues for telemedicine: federal regulatory and state specific licensure requirements (for all providers); current CMS CoPs; health insurer panel requirements for credentialing; the privileged provider’s medical staff category?
  • Does the current structure of your quality-improvement and peer review programs adequately protect shared data between originating and distant sites?
  • Is there a compliance plan that addresses billing practices and regulatory noncompliance?
  • Are there written agreements in place between the distant site hospital or entity and the hospital seeking services?
  • Do you have the appropriate medical staff leadership and medical staff services resources to manage your credentialing and privileging activities?
  • Are there appropriate, required governing body, medical staff leadership and medical staff services training and education on credentialing and privileging for telemedicine?
Analyzing Insurance Coverage

By way of this next series of questions, you can begin to bring into focus some possible strategies for treating telemedicine risk exposures by using the risk transfer technique to ensure you have the right coverage for those risks:
  • Will existing health professional liability (HPL) coverage address errors and omissions on the part of distant-site physicians and practitioners?
  • Do you have adequate coverage limits for the telemedicine activities and providers?
  • Are subcontractors allowed? If so, confirm the subcontractor’s insurance coverage and limits.
  • Are there shared or individual coverage limits? Which should be required to meet your risk financing needs?
  • Are there any jurisdictional limits for coverage under the policy? State-to-state provider licensure?
  • Are there practitioners practicing outside of the U.S.? Do you have any exposure in countries outside of the U.S.? Will (HPL) coverage extend to such services?
  • Is your insurance carrier licensed to write coverage in multiple states? Foreign countries?
  • What types of insurance coverage are in place for negligent credentialing, where claims are based on reliance on credentialing and privileging information submitted by a distant-site hospital or distant-site telemedicine entity?
  • Does the hospital have appropriate coverage for business disruption (the third-party vendor stops offering the service)?
  • Are insurance coverages (at limits set in medical staff bylaws approved by the governing body) in place for credentialing and privileging legal exposures:
    • Does the telemedicine provider have professional liability coverage for this service?
    • Does the telemedicine provider have cyber risk coverage?
    • Does your cyber risk/technology E&O coverage extend to the telemedicine activities?
    • What types of insurance coverages and limits will be required by contract of the distant-site hospital and distant-site telemedicine entities?
    • Will the contract preclude shared coverage among the (ever-changing) list of care providers and the distant-site hospital or distant-site telemedicine entity?
    • Will the insurance coverage include indemnification coverage for cost of defense up to the point of disposition of a regulatory investigation based on the telemedicine services furnished by the distant-site hospital or distant-site telemedicine entity?
  • Review of your insurance coverages should extend to the various layers of an insurance program – excess carriers – and also to specialty programs such as RRGs, insurance trusts and captive insurance plans.
In Summary

How can you develop a discipline of comprehensive risk identification, analysis, treatment and evaluation to address risks such as telemedicine credentialing? Here are some key steps in the process to consider:
  • First, create or gather your “Operational/ERM Committee” (or whatever you wish to call it). There should be a core group for this, and then perhaps create ad hoc “tactical task forces” for each specific issue as necessary. In the case of telemedicine, this may mean consulting with internal and external resources such as legal, insurance carrier, risk management, medical staff, medical staff services, marketing, finance, regulatory, accreditation, quality, compliance and IT professionals.
  • Gather data and input. As a side note, this is an area where some insurance carriers provide valuable tools and resources for their clients to use; for example, OBPI has a web-based Telemedicine Risk Assessment Tool that will efficiently perform a “risk inventory” to help in your data gathering and risk identification. So be sure to tap into that expertise.
  • Create/document your “risk inventory” (include all of the domains of risk).
  • Perform your risk “gap analysis.” Where are your weaknesses or gaps?
  • Determine your risk evaluation (for each weakness/gap determine the domains affected and the degree of impact to the organization).
  • Decide how you wish to treat each risk exposure (the proposed risk treatment technique and proposed action plan). Develop or revise your written operational plans, policies and procedures and credentialing and privileging services agreements as necessary.
  • Talk to your producer/agent to determine what type of coverage, at what limits of liability, is best for your organization.
  • Put your requirements for coverage in your organization bylaws or medical staff bylaws.
  • Monitor, measure (develop dashboards/scorecards), re-evaluate and re-design as necessary.
In the End

To circle back to the initial question, “Am I covered?”: as you can see, the answer may not be a simple, immediate “yes.” “It depends” is what you are more likely to hear. But despair not. Remember, we’re all risk managers, in this together, trying to protect the assets of the organization by treating risk exposures adequately and effectively. Be prepared to present your ERM analysis and treatment results in response to questions (or, even better, preemptively present them when you ask the question about insurance coverage), and you’ll be on your way to an answer that is hopefully more like, “Of course you’re covered. Your telemedicine program is a good thing,” than, “Houston, we have a problem.” And you, your carrier and your broker can all breathe a sigh of relief and get a good night’s sleep.

Patricia Hughes

Patty Hughes is responsible for OBPI’s risk management resources and services for healthcare policyholders. Recognizing that the role of the healthcare risk manager is changing, Hughes continuously strives to develop and position offerings that support the evolution of this role across all healthcare settings and that help demonstrate the value of an effective risk program in an organization.

The Gristle in Dodd-Frank

An unintended consequence of Dodd-Frank means that some insurers are unfairly being held to capitalization standards designed for banks.

I love using the phrase “unintended consequences” when talking about our issues on Capitol Hill. It’s so commonly understood among veteran staffers that legislative actions produce market reactions, some that are unexpected and unintended. Whoops!
Sometimes these unintended consequences are significant, like when Congress passed the behemoth rewrite of financial regulations in the Dodd-Frank Act. A big unintended consequence of that law gave the Federal Reserve the authority to regulate non-bank “systemically important financial institutions” (SIFIs), as designated by the Financial Stability Oversight Council (FSOC), with the same capital standards it imposes on banks. Insurance companies at risk of being regulated by the Federal Reserve, like MetLife, Prudential and AIG, face the threat of being held to an additional layer of capital standards that are bank-centric and that threaten their regulatory compliance models and, ultimately, product safety.

The thing is, the business of insurance is very different from banking, and regulatory capital standards designed to protect consumers should reflect those differences. Property-casualty and life insurance products are underwritten with sophisticated data and predictable global risk-sharing schemes that inherently withstand most market fluctuations. And to protect consumers, different capital standards are imposed on insurance companies for the different models and products they produce. Traditional banks, however, face different economic threats, requiring different standards. There cannot be a run on insurers with claims the way there can be on banks. The last economic crisis demonstrated that varying insurance capital standards protected the insurance industry throughout the global debacle. Even AIG’s insurance operations were well protected (it was AIG’s non-insurance financial products division that led to the company’s near-demise).

Allowing the Fed to regulate insurers with the same standards as banks not only threatens corporate compliance models but also ultimately makes it more expensive for insurers to share risk, increases the cost for the same level of coverage and spikes prices for consumers. Even the congressional authors of the too-big-to-fail language recognize the issue and are pushing to correct it. Sen. Susan Collins, R-Maine, who originally wrote the Dodd-Frank provision allowing the FSOC to designate insurance companies as SIFIs, recognizes that any capital standards imposed by the Fed should be duly tailored for insurance companies. She said in congressional testimony: “I want to emphasize my belief that the Federal Reserve is able to take into account—and should take into account—the differences between insurance and other financial activities…. While it is essential that insurers subject to Federal Reserve Board oversight be adequately capitalized on a consolidated basis, it would be improper, and not in keeping with Congress’s intent, for federal regulators to supplant prudential state-based insurance regulation with a bank-centric capital regime for insurance activities.” Fed Chair Janet Yellen, who is responsible for implementing the law, agrees.

So there’s now legislation in the grinder designed to fix the problem by giving the Fed flexibility to tailor capital standards to the unique characteristics of the insurance industry. The bill passed the Senate without opposition but, at the time of this writing, is stalled in the House and risks being caught in the partisan battle between the House and Senate’s varying legislative vehicles. It’s rightly frustrating to stakeholders and lawmakers that the fix is held up, but it’s not surprising that another serious unintended consequence is facing our industry.
I’ve used the term when discussing the Foreign Account Tax Compliance Act (FATCA), flood reform, and the Affordable Care Act (ACA). I hope we can see the legislative fix to this latest unintended consequence signed into law soon. This article first appeared in Leader’s Edge magazine.

Joel Kopperud

Joel Kopperud is the Council of Insurance Agents & Brokers’ vice president of government affairs. He focuses on legislative and regulatory activity affecting employer-provided benefits, property/casualty insurance regulation and federal natural catastrophe policies. He is a regular contributor to Leader’s Edge magazine.

Core Transformation – Start Your Engines!

Insurers recognize the need for core transformation but need three technical capabilities to be able to keep up with changes in the market.

Ready, GO, set! That might not describe every core modernization project, but it certainly can seem that way in today’s fast-moving environment. Now that the insurance industry recognizes modernization as an indispensable tool for remaining competitive, it is worthwhile to take a step back and look at the technical capabilities that insurers really need from modern core systems to fulfill the potential of core transformation.

First, it helps to define what exactly a modern system is today. This is trickier than it sounds because the definition of “modern” has changed in recent years – and will continue to change. Core systems that were considered modern in 2010 are already showing their age, as recent systems have far outpaced their capabilities. Strategy Meets Action (SMA) defines a modern system as an application that includes robust configuration capabilities usable by both IT and business users, uses standardized application programming interfaces (APIs) to facilitate integration of new systems and technologies and leverages service-oriented architecture (SOA) principles to enable scalability. Each of these capabilities is crucial to being able to use core modernization as a launch pad for core transformation.

In our recent study on trends in policy administration, an amazing 94% of P&C insurers reported that product configuration capabilities are a required feature in a new policy administration system. This is a requirement across the industry for all core systems. Configuration capabilities are so important because they are critical to accelerating speed to market and speed to service. An insurer’s ability to react to market changes and take advantage of new opportunities is limited by the amount of time it takes internally to roll out new products and services or modify existing ones. Robust configuration tools not only make configuration easier but also spread it throughout a wider portion of the company, reducing the bottlenecks that often occur when all changes must be coded by a programmer. In a market where products and services are more personalized, configuration within policy, billing and claims becomes an essential tool.

Core systems today are required to dynamically integrate with various internal and external systems and new and real-time data, as well as big data. SMA’s research reveals that, as the sophistication of the solutions on the market has grown, insurers are often using third-party solutions for ancillary systems like business intelligence, agent portals, new business and document management. The number of data sources continues to grow significantly each year, including maturing and emerging technologies like telematics and the Internet of Things (IoT) that generate large quantities of data ripe for analysis. Data from these and other new technologies have myriad uses, including to rate a risk, personalize a service or present a new product offering. Specific solutions depend on integration with the core, and reducing the time and effort demanded by integration promotes the consideration of, for example, a business intelligence (BI) solution capable of vastly more in-depth analysis than the integrated BI component of a PAS, or an external data source that provides information that an insurer otherwise could not use. Easing the friction of integration benefits insurers well into the future, because easier integration today also translates to less arduous integration with systems and applications not yet imagined.
The most modern solution on the market is only as good as the book of business it can manage. With transaction volume and speed increasing, insurers must have scalable systems capable of meeting new and increasing demands, which requires leveraging SOA principles. Not only do insurers gain the ability to use a modern system’s capabilities with an increasing number of transactions, they also can extend the life of the system by enabling it to expand. When demand spikes after a catastrophic event or when a new market opportunity is identified, the system is well prepared to manage it. Cloud capabilities are presenting real alternatives for providing the scalability needed to successfully handle peak workloads, both anticipated and unexpected.

These three critical components of a modern system are not just what insurers need today – they prepare insurers to adapt to the future. Modern core systems with these capabilities can evolve along with the insurer. Insurers need to be able to shift their technology environment as needed to meet the future’s unknown opportunities and challenges, and that requires the ability to create products, integrate the latest systems and technologies and scale in concert with an insurer’s book of business. When the need arises to integrate artificial intelligence, for example, or to process millions more transactions per hour than ever before, the capacity is ready and waiting.

Once insurers know they can depend on their core systems to support them through market changes, they can focus their attention on optimizing their current processes and innovating to become a Next-Gen Insurer. The order is important – ready, set, go. No matter how fast you want to move, you need to plan and then execute! Don’t let technology drive you – embrace the change and adopt technology with your future vision in sight.

Karen Furtado

Karen Furtado, a partner at SMA, is a recognized industry expert in the core systems space. Given her exceptional knowledge of policy administration, rating, billing and claims, insurers seek her unparalleled knowledge in mapping solutions to business requirements and IT needs.

3 Keys to Achieving Sound Governance

Practitioners of enterprise risk management need to push for good governance because it reduces the biggest risks a company faces.

Of the many definitions of governance, the simplest ones tend to have the most clarity. For the purpose of this piece, governance is a set of processes that enable an organization to operate in a fashion consistent with its goals and values and the reasonable expectations of those with vested interests in its success, such as customers, employees, shareholders and regulators. Governance is distinct from both compliance and enterprise risk management (ERM), but there are cultural and process-oriented similarities among these management practices. It is well-recognized that sound governance measures can reduce the amount or impact of risk an organization faces. For that reason, among others, ERM practitioners favor a robust governance environment within an organization.

A few aspects of sound governance are worth discussion. These include: 1) transparency and comprehensive communications, 2) rule of law and 3) consensus-building through thorough vetting of important decisions.

Transparency

Transparency lessens the risk that either management or staff will try to do something unethical, unreasonably risky or wantonly self-serving, because decisions, actions and information are very visible. An unethical or covert act would stand out like the proverbial sore thumb. Consider how some now-defunct companies, such as Enron, secretly performed what amounted to a charade of a productive business. There was no transparency about what the assets of the company really were, how the company made money, what the real financial condition actually was and so on. Companies that want to be transparent can:
  • Create a culture in which sharing of relevant data is encouraged.
  • Publish information about company vision, values, strategy, goals and results through internal communication vehicles.
  • Create clear, task-by-task instructions that can be used to train staff and serve as a reference for all positions, and keep them readily accessible and up to date.
  • Create clear escalation channels for issues or requests for exceptions.
Rule of Law

Good governance requires that all staff know that the organization stands for lawful and ethical conduct. One way to make this clear is to have “law-abiding” or “ethical” as part of the organization’s values. Further, the organization needs to make sure these values are broadly and repeatedly communicated. Additionally, staff needs to be trained on what laws apply to the work they perform. Should a situation arise where there is a question as to what is legal, staff needs to know to whom they can bring the question.

The risks that develop out of deviating from lawful conduct include financial, reputational and punitive risks. These are among the most significant non-strategic risks a company might face. Consider a company that is found to have purposefully misled investors in its filings about something as basic as the cost of its raw materials. Such a company could face fines and loss of trust by investors, customers, rating agencies, regulators, etc., and individuals may even face jail time. In a transparent organization that has made it clear laws and regulations must be adhered to, the cost or cost trend of its raw materials would likely be a well-documented and widely known number. Any report that contradicted common knowledge would be called into question. Consider the dramatic uptick in companies being brought to task under the Foreign Corrupt Practices Act (FCPA) for everything from outright bribes to granting favors to highly placed individuals from other countries. In a transparent organization that has clearly articulated its position on staying within the law, any potentially illegal acts would likely be recognized and challenged.

How likely is it that a highly transparent culture wherein respect for laws and regulations is espoused would give rise to violations of prominent laws or regulations? It would be less likely, thus reducing financial, reputational and punitive risks. The current increase in laws and regulations makes staying within the law more arduous, yet even more important. To limit the risk of falling outside the rule of law, organizations can:
  • Provide in-house training on laws affecting various aspects of the business.
  • Make information available to staff so that laws and regulations can be referenced, as needed.
  • Incorporate the legal way of doing things in procedures and processes.
  • Ensure that compliance audits are done on a regular basis.
  • Create hotlines for reporting unethical behavior.
Consensus-Building

Good governance requires consultation among a diverse group of stakeholders and experts. Through dialogue, and perhaps some compromise, a broad consensus of what is in the best interest of the organization can be reached. In other words, important decisions need to be vetted. This increases the chance that agreement can be developed and that risks are uncovered and addressed. Decisions, even if clearly communicated and understood, are less likely to be carried out by those who have not had the chance to vet the idea.

Consider a CEO speaking to rating agency reviewers and answering a question about future earnings streams. Consider also that the CFO and other senior executives, in separate meetings with the rating agency, answer the same question in a very different way. In this scenario, there has clearly not been consensus on what the future looks like. A risk has been created that the company’s credit rating will be harmed. To enhance consensus-building, companies can:
  • Create a culture where a free exchange of opinions is valued.
  • Encourage and reward teamwork.
  • Use meeting protocols that bring decision-making to a conclusion so that there is no doubt about the outcome (even when 100% consensus cannot be reached).
  • Document and disseminate decisions to all relevant parties.
During the ERM process step wherein risks are paired with mitigation plans, improved governance is often cited as the remedy to ameliorate the risk. No surprise there. Clearly, good governance reduces risk of many types. That is why ERM practitioners are fervent supporters of strong governance.

Donna Galer

Donna Galer is a consultant, author and lecturer. 

She has written three books on ERM: Enterprise Risk Management – Straight To The Point, Enterprise Risk Management – Straight To The Value and Enterprise Risk Management – Straight Talk For Nonprofits, with co-author Al Decker. She is an active contributor to the Insurance Thought Leadership website and other industry publications. In addition, she has given presentations at RIMS, CPCU, PCI (now APCIA) and university events.

Currently, she is an independent consultant on ERM, ESG and strategic planning. She was recently a senior adviser at Hanover Stone Solutions. She served as the chairwoman of the Spencer Educational Foundation from 2006-2010. From 1989 to 2006, she was with Zurich Insurance Group, where she held many positions both in the U.S. and in Switzerland, including: EVP corporate development, global head of investor relations, EVP compliance and governance and regional manager for North America. Her last position at Zurich was executive vice president and chief administrative officer for Zurich’s world-wide general insurance business ($36 Billion GWP), with responsibility for strategic planning and other areas. She began her insurance career at Crum & Forster Insurance.  

She has served on numerous industry and academic boards. Among these are: NC State’s Poole School of Business’ Enterprise Risk Management’s Advisory Board, Illinois State University’s Katie School of Insurance, Spencer Educational Foundation. She won “The Editor’s Choice Award” from the Society of Financial Examiners in 2017 for her co-written articles on KRIs/KPIs and related subjects. She was named among the “Top 100 Insurance Women” by Business Insurance in 2000.

Stunning Patterns Found in the Dark Net

Counterintelligence in the Dark Net finds that China is getting a bad rap on hacking but that lots of unexpected, dangerous alliances are forming.

One of the most powerful technologies for spying on cyber criminals lurking in the Dark Net comes from a St. Louis-based startup, Norse Corp. Founded in 2010 by its chief technology officer, Tommy Stiansen, Norse has assembled a global network, called IPViking, composed of sensors that appear on the Internet as vulnerable computing devices. These “honeypots” appear to be everything from routers and servers, to laptops and mobile devices, to Internet-connected web cams, office equipment and medical devices. When an intruder tries to take control of a Norse honeypot, Norse grabs the attacker’s IP address and begins an intensive counterintelligence routine. The IP address is fed into web crawlers that scour Dark Net bulletin boards and chat rooms for snippets of discussions tied to that IP address. Analysts correlate the findings, and then IPViking displays the results on a global map revealing the attacking organization’s name and Internet address, the target’s city and service being attacked and the most popular target countries and origin countries.

Stiansen grew up tinkering with computers on a Norwegian farm, which led him to a career designing air-traffic control and telecom-billing systems. After immigrating to the U.S. in 2004, Stiansen began thinking about a way to gain a real-time, bird’s-eye view of the inner recesses of the Dark Net. The result was IPViking, which now has millions of honeypots dispersed through 167 data centers in 47 countries. Norse recently completed a major upgrade to IPViking, which has led to some stunning findings. Stiansen explains:

3C: Can you tell us about your most recent milestone?

Stiansen: We have managed to do a tenfold (increase) to where we can now apply millions of rules in our appliance.

3C: So more rules allow you to do what?

Stiansen: It allows us to have a lot more threat data and apply a lot more intelligence to a customer’s traffic. We can start applying more dynamic data. Our end goal is to apply full counterintelligence onto traffic. Meaning when we see a traffic flow coming through our appliance we will be able to see the street address, the domain, the email address used to register this domain. We can see who a packet is going to, and the relationship between the sender and receiver, all kinds of counterintelligence behind actual traffic, not just for blocking but for visualization.

3C: That level of detail was not available earlier?

Stiansen: Nope. This is something we’ve pioneered. This is our platform that we built so we can enable this (detailed view) to actually happen.

3C: So what have you discovered?

Stiansen: We’re learning that traffic and attacks coming out of China isn’t really China. It’s actually other nations using China’s infrastructure to do the attacks. It’s not just one country, it’s the top 10 cyber countries out there using other countries’ infrastructure.

3C: So is China getting a bad rap?

Stiansen: Correct.

3C: Who’s responsible? Russia? The U.S.? North Korea?

Stiansen: Everyone.

3C: What else are you seeing?

Stiansen: We’re also seeing how hackers from certain communities are joining together more and more. The hacking world is becoming smaller and smaller. Iranian hackers are working with Turkish hackers. Pakistani and Indian hackers, they’re working together. Indonesia hackers and Iranian hackers are working together.

3C: Odd combinations.

Stiansen: It’s weird to see these mixes because there’s no affiliation, there’s no friendship between the countries on a state level. But the hacker groups are combining together. The borders between hackers have been lifted.

3C: What’s driving them to partner, is it money or ideology?

Stiansen: All of the above. That’s the thing, the people who have similar ideologies find each other on social media and start communicating with each other. And the people with the financial means and shared goals meet each other, that’s the evolution. And when they do that, they become really powerful.

Byron Acohido

Byron Acohido is a business journalist who has been writing about cybersecurity and privacy since 2004, and currently blogs at LastWatchdog.com.

Laying the Foundation for Drug Formularies

Drug formularies hold great promise to control costs and improve treatment, but they must be phased in carefully, over years.

When Texas announced an 80% drop in the cost of “N” drugs prescribed for new injuries, workers’ compensation stakeholders took notice. (Medications designated as “N” in the Official Disability Guidelines are not appropriate for first-line therapy.) Since that announcement, the implementation of a closed formulary has moved near the top of several state legislative agendas. While the results being reported out of Texas are still fairly recent, the concept of a closed formulary is not a new idea in that state. Although changes in Texas’ work comp medical cost trends appear sudden, the process for achieving them was anything but. When HB 7 was passed in 2005, it created the Division of Workers’ Compensation (DWC) within the Texas Department of Insurance and, among other things, authorized “evidence-based, scientifically valid and outcome-focused” medical treatment guidelines and a closed formulary for prescription medications. These steps, along with the existing preauthorization and dispute-resolution processes, provided the solid regulatory infrastructure needed to implement a successful closed formulary. The Texas Closed Formulary (TCF) requires preauthorization for medications identified as “N” drugs in the current edition of the Work Loss Data Institute’s Official Disability Guidelines (ODG). These guidelines are updated on a monthly basis to encompass new medications and new research surrounding current medications. The TCF excludes not only “N” drugs but also any compound medication that contains an “N” drug, as well as experimental drugs that are not yet broadly accepted as the prevailing standard of care. Naturally, implementing these requirements would mean a substantial change in prescribing habits. (That was the point.) The problem was that immediate and strict implementation could mean that injured workers were suddenly denied previously prescribed medications without allowing proper time for weaning. To counter this problem, the DWC created a “legacy period” during which older claims would not yet be subject to the closed formulary, even while providers had to comply with formulary requirements when treating newly injured patients. This approach allowed providers to adapt to the new preauthorization requirements and adjust their prescribing habits over time in existing claims. At the same time, it ensured formulary compliance from the outset in new claims. After the conclusion of the two-year legacy period, all claims became subject to the TCF. In effect, this legacy period was a compromise that allowed Texas to begin implementing the TCF in all of its claims without hurting patients already on long-term prescription therapy. The first (and, to date, only) state to attempt to replicate the Texas model was Oklahoma. Oklahoma followed the Texas model closely and, in some places, added improvements. For example, while the TCF excludes all compound medications containing an “N” drug, Oklahoma's closed formulary excludes all compound drugs, regardless of ingredients. Unfortunately, there are also some drawbacks, the main one being limited application. Because the Oklahoma Closed Formulary is contained within the rules for Oklahoma’s new Workers’ Compensation Commission, it applies only to those cases within the commission’s jurisdiction. The commission has jurisdiction over all claims with a date of injury on or after Feb. 1, 2014. Older claims are handled by the Workers’ Compensation Court of Existing Claims, which has no closed formulary provision. 
This means that a doctor treating a worker who was injured on Jan. 31, 2014, and another who was injured on Feb. 1, 2014, will have to abide by evidence-based treatment guidelines only for the second worker. While Oklahoma has adopted medical treatment guidelines and taken steps to require preauthorization, these requirements are relatively new within the Oklahoma workers' compensation system. As a result, providers, patients and payers are still adjusting to the new system, and there has been a fair amount of confusion. Implementing a successful closed formulary does not happen overnight. Texas started the process 10 years ago and has been consistently working to ensure that its reforms were successful. After taking the time to establish the necessary regulatory infrastructure, adopt treatment guidelines and create a logical solution to ensure a unified standard of care across all claims, the state is finally seeing clinical and economic benefits. As Arkansas, California, North Carolina, Tennessee and other states start thinking about replicating the results of Texas by implementing their own closed drug formularies, they would do well to have conversations about these principles first. This article was originally posted at WorkCompWire.
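To make the formulary mechanics described above concrete, here is a minimal Python sketch of the decision logic. It is illustrative only: the drug names and the Texas phase-in parameters are simplified placeholders, and only the Feb. 1, 2014, Oklahoma cutoff comes from the article; real determinations rest on the current ODG edition and each state's rules.

```python
# Illustrative sketch only. Drug lists and phase-in dates are placeholders,
# not actual ODG content or rule text; the Feb. 1, 2014, Oklahoma cutoff is
# the one date taken from the article.
from datetime import date

N_DRUGS = {"drug_a", "drug_b"}           # hypothetical "N"-status medications
OK_COMMISSION_CUTOFF = date(2014, 2, 1)  # Oklahoma commission jurisdiction begins

def tcf_requires_preauth(ingredients: list[str], experimental: bool) -> bool:
    """Texas Closed Formulary: preauthorization for "N" drugs, any compound
    containing an "N" drug, and experimental drugs."""
    return experimental or any(i.lower() in N_DRUGS for i in ingredients)

def ok_requires_preauth(ingredients: list[str], experimental: bool) -> bool:
    """Oklahoma variant: all compound drugs are excluded, regardless of ingredients."""
    if len(ingredients) > 1:             # any compound at all
        return True
    return tcf_requires_preauth(ingredients, experimental)

def ok_formulary_applies(date_of_injury: date) -> bool:
    """The Oklahoma formulary reaches only claims within the commission's jurisdiction."""
    return date_of_injury >= OK_COMMISSION_CUTOFF

def tcf_applies(date_of_injury: date, formulary_start: date,
                legacy_end: date, today: date) -> bool:
    """Texas phase-in: new injuries were covered immediately; older ("legacy")
    claims became subject to the formulary only after the legacy period ended."""
    return date_of_injury >= formulary_start or today >= legacy_end

# The Jan. 31 vs. Feb. 1, 2014, example from the article:
assert not ok_formulary_applies(date(2014, 1, 31))
assert ok_formulary_applies(date(2014, 2, 1))
```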

Michael Gavin

Michael Gavin is president of PRIUM. He is responsible for the strategic direction and management of the medical intervention company. He brought considerable experience in several major sectors of the health care industry to PRIUM when he joined as chief operating officer in 2010, and he is the author of the thought-provoking Evidence-Based blog.

Microinsurance Has Macro Future

Big data will enable microinsurance by improving weather forecasts -- and farmers will thrive throughout the developing world.

"'We’ll all be rooned,’ said Hanrahan….” So goes the famous Australian bush poem by John O’Brien about the plight of farmers going from drought to flood to bush fire – one extreme weather situation after another. And though we are nearly 100 years on since that poem was written, we seem to be no further along in being able to predict weather with any certainty more than a few days into the future. In fact, extreme weather seems to be hitting more frequently and with greater ferocity because of the apparent effects of global climate change. The extended 2013 winter in Europe cost the economy there more than $7 billion – that is just from being cold for a month longer than usual. Extreme weather events have dominated the headlines, especially where they impinge on highly developed insurance markets such as North America and Europe. But from the perspective of the impact on human lives, the greatest risk lies in Asia and Africa, where a vast majority of people depend upon subsistence farming and there is very little penetration of traditional financial services. A number of governments in the region, in partnership with semi-government, educational institutions and private organizations, have established a range of programs to foster the development of sustainable microfinance and microinsurance services for the most at-risk segments of their communities. In India alone, there are more than 700 million farmers and farm workers who struggle with extreme weather risk every season. Building sustainable programs, now there's the trick! In one program in India that ran from the mid-'80s through to the end of the twentieth century, the cumulative premiums were $80 million, while the cumulative claims were $461 – hardly a sustainable proposition. In another program, a World Bank study showed that the microinsurance proposition was advantageous to farmers only in very extreme situations, so in most cases it was uneconomic for farmers to buy the insurance. From 2001, the Indian government has ensured the growth of microinsurance through a regulatory framework set up as part of the entry of private insurers into the market. Popular products in the sector are weather index policies, where payouts occur if rainfall is below a trigger level, in a particular area. The premium for these types of policy have proven to be expensive, but, with government subsidies and an education program, awareness and acceptability of this kind of financial service have grown in communities in rural India. Governments in China, Bangladesh, Indonesia and the Philippines are following suit, by introducing their own agricultural insurance programs. The major problem for insurers writing this kind of business is getting a good handle on the risk, to enable correct pricing. For the most part, insurers have to rely on historical data, which really only establishes a wide range of outcomes; with extreme weather trends continuing, insurers tend to be very conservative in risk pricing. This is where big data and analytics come in. In the commodities sector, at least one player has marketed reports that help predict the price of commodity futures. A case in point is the recent U.S. drought. By using National Oceanic and Atmospheric Administration (NOAA) and NASA remote Earth-sensing data, and coupling with advanced climate predictive analytics, the U.S. drought was predicted three months in advance of the U.S. government's declaring drought. 
The model enabled the assessment of the weather impact on particular areas of the U.S., as well as the impact on the particular commodity crop grown in that area (corn), and, consequently, a prediction of the price of the commodity at harvest time based on the expected overall yield, with drought factored in. Currently, more than 15 petabytes of global public data are produced annually, and that amount is set to increase to more than 300 petabytes as more satellites come online over the next few years. The computing power and technology to cope with huge data sets continue to improve each year with big data solutions. These rich data sources and new technology solutions represent an unparalleled opportunity for governments and communities to turn microinsurance from a subsidized, unprofitable activity into a sustainable model that spreads economic stability and prosperity. By enabling weather risk to be modeled more accurately, underwriters will be able to price policies at levels more farmers can afford. Studies are showing that, where microinsurance is in operation, microfinancing is supported, as farmers can have certainty around being able to meet their loan commitments. These financial services are being used by farmers to improve their farms' yield by investing in appropriate weather risk mitigation (irrigation, soil moisture conservation, etc.) and productivity enhancements (planting automation, genetically modified seeds, fertilizers, improved pest control, etc.). This virtuous cycle lifts the sector from depending on subsidies and government programs to being commercially viable and self-sustaining. Definitely a win-win-win proposition. Far from the pessimistic doom-mongering of Hanrahan, I see a world more in line with Peter Diamandis's vision as outlined in his book Abundance: The Future Is Better Than You Think. I don't know about you, but I for one would love the bragging rights to say my industry is helping to improve the lives of billions on planet Earth, while still making a commercially reasonable profit. Hey, I wonder if it's going to rain today?
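As promised above, here is a minimal Python sketch of how a weather index (parametric) policy pays out. The trigger level, exit level and sum insured are invented for illustration; real products calibrate these from local rainfall history and crop needs.

```python
# Illustrative sketch only. A parametric rainfall policy pays on a measured index,
# not on assessed losses; every figure below is invented for illustration.
def rainfall_index_payout(observed_rainfall_mm: float,
                          trigger_mm: float = 100.0,   # payouts begin below this level
                          exit_mm: float = 40.0,       # full payout at or below this level
                          sum_insured: float = 500.0) -> float:
    """Linear payout between the trigger and exit rainfall levels."""
    if observed_rainfall_mm >= trigger_mm:
        return 0.0
    if observed_rainfall_mm <= exit_mm:
        return sum_insured
    shortfall = (trigger_mm - observed_rainfall_mm) / (trigger_mm - exit_mm)
    return round(shortfall * sum_insured, 2)

# A season with 70 mm of rain against a 100 mm trigger pays half the sum insured.
print(rainfall_index_payout(70.0))   # 250.0
```

Because the payout depends only on the measured index, claims need no loss adjusters on the ground, which is what keeps administration costs low enough for microinsurance to work at village scale.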

Andrew Dart

Andrew Dart is a partner with The Digital Insurer. He was previously the sole insurance industry strategist for CSC in AMEA and one of CSC’s “ingenious minds” globally. With more than 30 years of international insurance experience, Dart has worked in Asian cities, including Tokyo, Jakarta, Singapore and Hong Kong.