
How to Cut P&C Claims Leakage

In their search to boost profits and reduce their loss ratio, property and casualty (P&C) insurance carriers often turn to improving a cast of “usual suspects”: sales, pricing, new product development and a host of operational areas from new business through subrogation. But the biggest area to target—the one with the largest, near-term upside potential—is claims processing. Every insurer wants to reduce operating costs, cut claims leakage and reduce claim severity.

But what’s the best approach?

That depends on whom you ask. Technology providers insist that bleeding-edge, massive new systems are the answer. Internal processing teams will push for more human resources—with more relevant experience and better training. Other executives will tell you to focus on reducing claims fraud.

But if you ask The Lab, we will say that the best approach is to keep asking questions, because the answers will point you toward a massive payback—a windfall. For example, what is the “standard” P&C claims leakage ratio, i.e., the industry average benchmark? And what is the source for this leakage number? Probably, the answer you get on leakage ratio will fall in the 2% to 4% range. Press for the source. The likely answer will be vague and hard to pin down. It’s unlikely that the answer will be: “our routine analysis and measurement of our claims processing operations—at the individual adjuster level.”

Stated differently, the source is actually “conventional industry wisdom.” If so, you’ve stumbled into a diamond field of improvement opportunities. To scoop them up, all you need to do is upgrade your company’s ability to perceive and manage claims processing at an unprecedented level of granular detail.

It’s worth the heavy investment of initially tedious effort. That’s because actual claims leakage is typically several multiples of this conventional-wisdom average of 2% to 4%: The Lab routinely documents 20% to 30%, and even more. That means that the payoff for reducing leakage, even for smaller P&C insurers, can easily reach hundreds of millions of dollars—which drop straight to the bottom line.

And no, reining in leakage doesn’t devastate the customer experience. That’s because plenty of completely satisfied policyholders already have their claims paid by adjusters who follow the carrier’s guidelines. The lower-performing adjusters, on the other hand, simply don’t follow these guidelines, and carriers fail to practice the process-management discipline needed to ensure that all adjusters adhere to the loss-payment rules and targets.

Now, if you ask The Lab precisely how to reduce your claims leakage and loss ratio, we will point to three underused tools, or improvement approaches, to help P&C insurers surmount this challenge and achieve breakthrough levels of benefits, specifically:

  1. Knowledge work standardization (KWS)
  2. Business intelligence (BI)
  3. Robotic process automation (RPA)

While the latter two—BI and RPA—require a nominal amount of technology, the first approach, standardization, not only paves the way for the other two but also requires no new technology whatsoever. Typically, Knowledge Work Standardization, or KWS, alone delivers labor savings in excess of 20%, easily self-funding its own implementation—and readily covering much of the BI and RPA improvement costs. Taken together, these three tools rapidly transform an insurer’s P&C claims-processing operation and upgrade its related management capability. This allows management to significantly reduce loss payments while simultaneously improving operating efficiency. The result is an increase in “operating leverage”: the capability of a business to grow revenue faster than costs.

Interestingly, these three tools, or improvement approaches, also deliver major benefits for customer experience, or CX, aiding in policyholder satisfaction and retention. Here’s how:

  • First, roughly half of the hundreds of operational improvements identified during business process documentation will also deliver a direct benefit to policyholders.
  • Second, the process documentation and data analysis help pinpoint the reasons that policyholders leave. The predictive models that result help reduce customer erosion.
  • Third, these documentation and analytical tasks also identify the most advantageous opportunities for cross-selling and upselling.

In this article, we will cover these three tools/improvement approaches broadly, then we’ll drill down to explore their real-world application—and benefits—in P&C claims processing.

1: The Search for Standardization in P&C Insurance Operations

Standardization—the same innovation that gave rise to the modern factory system—is arguably the most overlooked improvement tool in insurance operations today. And it applies to everything: data, processes, work activities, instructions, you name it. Its opposite, variance, is standardization’s costly, inefficient evil twin. Consider:

  • Insurance operations performance is typically reported in the form of averages. These numbers are usually calculated for work teams or organizations. And this is also how supervisors approach their management task—by groups. Individuals’ performance is rarely measured, compared, benchmarked or managed.
  • Rules of thumb routinely apply. “Here’s how many claims an adjuster should be able to process in a given day or month.”
  • Industry lore trumps data-driven decision-making: “Claims processing is an art, not a science.” Or, even more dangerously: “Faster adjusters are the costliest ones, because they’ll always pay out too much.” (Spoiler alert: The opposite is true.)
  • Differences in details go unexploited: At one insurer, for example, The Lab discovered that five teams were processing claims—and each team used its own format and guidelines for notes. That single, simple issue confounded everyone downstream, as they struggled to reconcile who meant what.
  • “NIGO” prevails. The sheer opportunity cost of things like forms and fields submitted “not in good order,” or NIGO, can be staggering—often with tens of millions of dollars in unrecouped revenue flying just below executives’ radar.

2: Applying Business Intelligence, or BI, to Insurance Operations

Modern BI applications derive their power from their ability to create a clear picture from crushingly vast quantities of seemingly incompatible data. The best BI dashboards visualize this data as insightful, inarguable business-decision information, updated in real time. They let users zoom out or drill down easily; just think of Google Maps. You can click from a state, to a city, to a house, then back up to a continent, using either a graphical map format or a 3-D satellite photo.

Then why aren’t insurers routinely harnessing this power? Most already own one or more BI applications, yet they’re not delivering that critical Google Maps-style visualization and navigation capability.

This lack can be traced to two intertwined obstacles: business data and business processes. Each requires its own tediously mundane, routinely overlooked and massively valuable non-technology solution: standardization.


Business data is already well defined—but it’s defined almost exclusively in IT terms. Think of the latitude/longitude coordinates on Google Maps; do you ever actually use those? These existing IT definitions are difficult, if not impossible, to reliably link to business operations and thus produce useful, navigable business information.

The Lab solves this problem by mapping existing “core systems” data points to products, employees, transactions, cycle times, organizational groups and more. The solution requires standardizing the company organization chart, product names, error definitions and similar non-technology items. This is a tediously mundane task.

Technology can’t do this. But people can, in a few weeks if they have the right templates and experience.
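To make this “final mile” mapping concrete, here is a minimal sketch in Python. Every code, name and column in it is a hypothetical placeholder, not The Lab’s actual template; the point is simply that the mapping itself is a business artifact, not a technology one.

```python
# Minimal sketch of mapping raw, IT-defined codes from a core system to
# standardized business names. All codes, names and columns are hypothetical.
import pandas as pd

# Standardized product names agreed by the business, keyed by the raw codes
# that actually live in the core claims system.
PRODUCT_NAME_MAP = {
    "HO3": "Homeowners",
    "PPA": "Personal Auto",
    "CMP": "Commercial Package",
}

# Standardized organization hierarchy, keyed by raw cost-center codes.
ORG_UNIT_MAP = {
    "CC-1042": "Claims / Property / FNOL",
    "CC-1043": "Claims / Property / Field Adjusting",
}

def add_business_dimensions(core_extract: pd.DataFrame) -> pd.DataFrame:
    """Attach business-friendly dimensions so a BI tool can roll up and drill down."""
    enriched = core_extract.copy()
    enriched["product"] = enriched["product_code"].map(PRODUCT_NAME_MAP)
    enriched["org_unit"] = enriched["cost_center"].map(ORG_UNIT_MAP)
    return enriched

if __name__ == "__main__":
    sample = pd.DataFrame(
        {"claim_id": ["C1", "C2"], "product_code": ["HO3", "PPA"], "cost_center": ["CC-1042", "CC-1043"]}
    )
    print(add_business_dimensions(sample))
```

Building the two dictionaries is the tedious, non-technology part; applying them is trivial.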

Business processes are also already defined—but with wildly inconsistent scope. For example, the IT definition typically involves a “nano-scale” process—like a currency conversion or invoice reconciliation. Business definitions represent the polar opposite: global scale. Think of “order-to-cash” or “procure-to-pay.” All parties involved—throughout business and IT—thus talk past each other, assuming that everyone is on the same page. Worst of all, almost no business processes are documented. They exist informally as “tribal knowledge.”

The Lab solves this disconnect by mapping business processes, end-to-end at the same “activity” level of detail that manufacturers have perfected over the past century. Each activity is about two minutes in average duration. The range for all activities is wide but easily manageable: from a few seconds to five minutes. Over the past 25 years, The Lab has process-mapped every aspect of P&C insurance operations—and we’ve kept templates of every detail for these highly similar processes. Consequently, we can (and routinely do) map business processes remotely, via web conference… around the world!

Rigorously defining, standardizing and linking business data and business processes underpins the best BI dashboards, delivering the Google Maps-style navigation that execs crave. This is how The Lab builds astonishingly insightful dashboards that make claims leakage losses apparent to our clients.

3: Robotic Process Automation, or RPA: A Powerful New Tool for P&C Carriers

Robotic process automation, or RPA, is simply software—offered by companies such as Automation Anywhere, Blue Prism and UiPath—that can “sit at a computer” and mimic the actions of a human worker, such as clicking on windows, selecting text or data, copying and pasting and switching between applications. If you’ve ever seen an Excel macro at work, then you can appreciate RPA; it simply handles more chores and more systems. And it isn’t limited to a single application, like Excel. It is as free to navigate the IT ecosystem as any employee.
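To make the “software robot” idea concrete, here is a minimal sketch in Python using the open-source pyautogui library. The screen coordinates, claim number and keystrokes are hypothetical placeholders; a commercial RPA platform would use screen-object recognition and workflow tooling rather than hard-coded coordinates.

```python
# Minimal sketch of an RPA-style task: switch windows, click a field, type a
# value and submit, just as a human worker would. Coordinates and the claim
# number are hypothetical placeholders.
import time

import pyautogui

CLAIM_NUMBER = "CLM-2024-000123"  # hypothetical example value

def copy_claim_into_estimating_system():
    """Mimic a human worker keying a claim number into another application."""
    pyautogui.hotkey("alt", "tab")                 # switch to the estimating application
    time.sleep(1)                                   # wait for the window to gain focus
    pyautogui.click(400, 320)                       # click the claim-number field (placeholder coordinates)
    pyautogui.write(CLAIM_NUMBER, interval=0.05)    # type the claim number like a person would
    pyautogui.press("enter")                        # submit the lookup

if __name__ == "__main__":
    copy_claim_into_estimating_system()
```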

RPA “robots” are thus ideally suited for mundane yet important repetitious tasks that highly paid P&C knowledge workers hate to do. Better yet, robots work far faster than people, without getting tired, taking breaks or making mistakes. This frees up human workers for higher-value activities.

RPA also confers customer experience, or CX, benefits. With faster operations, customers enjoy the Amazon-style responsiveness they’ve come to expect from all businesses. On-hold times are reduced, claims get processed faster and the entire company appears more responsive.

Beyond the dual opportunities of knowledge-worker labor savings and CX lift, RPA holds the power to disrupt entire industries. Deployed creatively in massive waves, it can deliver windfall profits on a scale not even imagined by its purveyors.

Yet, today, most insurance companies’ RPA efforts, if any, are stalled at the very beginning; recent surveys indicate that internal teams hit a 10-bot barrier and struggle to find more opportunities, or “use cases.” That’s because the underlying processes to be automated are never made “robot-friendly” in the first place. So there needs to be scrutiny of the different activities—and the elimination of all of the wasteful ones that hide in plain sight, such as rework, return of NIGO input, and so on.

How to Overcome the 10-Bot Barrier in P&C Claims Processing

First, set expectations to focus on incremental automation with bots. No, you’re not going to replace an entire adjuster with a bot. But, yes, you will be able to quickly use a bot to call a manager’s attention to a high-payback intervention in the P&C claims-adjusting process. Examples:

  • Managers look for inactivity on open claims: If a claim is open with no activity in the last 10 days, that’s a red flag. But many such claims are overlooked. A bot can call these out promptly (a minimal sketch of this check follows this list).
  • Full-replacement cost, instead of partial-replacement cost, is a major cause of overpayment and is most prevalent in roofing, flooring and cabinetry replacements. Bots can track payments and send management alerts based on line of coverage and even more granular detail. Roofing examples include whole-slope vs. whole-roof replacement and the number of roof squares replaced.
  • Audits are conducted on claims to improve quality and consistency—and to reduce overpayment. However, these are done on a very limited sample and only after claims have been paid and closed. Based on learnings from past audits, bots can alert management when certain claims-processing failures happen on a live basis. Managers can intervene… before payment.
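As promised in the first bullet above, here is a minimal sketch of an inactivity-flagging bot. It assumes claims activity is available as a simple extract with hypothetical columns (claim_id, status, last_activity_date, adjuster); a production bot would read from the claims system and push alerts into the manager’s queue.

```python
# Minimal sketch of the "inactivity flag" check described above.
from datetime import datetime, timedelta

import pandas as pd

INACTIVITY_THRESHOLD_DAYS = 10  # the red-flag threshold from the example above

def find_stalled_claims(csv_path: str) -> pd.DataFrame:
    """Return open claims with no recorded activity in the last 10 days."""
    claims = pd.read_csv(csv_path, parse_dates=["last_activity_date"])
    cutoff = datetime.now() - timedelta(days=INACTIVITY_THRESHOLD_DAYS)
    stalled = claims[(claims["status"] == "open") & (claims["last_activity_date"] < cutoff)]
    return stalled.sort_values("last_activity_date")

if __name__ == "__main__":
    flagged = find_stalled_claims("claims_export.csv")  # hypothetical file name
    # In practice the bot would e-mail these rows or post them to a work queue.
    print(flagged[["claim_id", "adjuster", "last_activity_date"]])
```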

Standardization (KWS), BI, & RPA: Focusing on P&C Claims Processing

All three of the above tools, or improvement approaches for P&C carriers—standardization or KWS; business intelligence, or BI; and robotic process automation, or RPA—can be readily applied to claims operations. Indeed, they seem to be custom-made for it.

Standardization

Consider the following story, created from a mashup of different P&C insurance carrier clients of The Lab:

This “insurer” had plenty of claims data to share with The Lab; in fact, theirs was better than most. But that’s not saying too much: While 40% of the data was usable and comprehensible, the other 60% wasn’t. (Remember: This is better than most P&C insurers.)

Data was reported weekly, and sometimes daily, on an organization-wide basis. Here’s what they had data to report on:

  • Overall averages of claims processed, based on total headcount.
  • Average losses paid per claim.

That said, the company never tracked the performance of individual claims processors. They were all effectively “self-managed,” following their own individual procedures. There were no standard, activity-level instructions and guidelines, set by management, for quantifying targets for time, productivity or effectiveness. There were, on the other hand, vague, directional methods, many in the form of undocumented “tribal knowledge” and “rules of thumb.” The claims processors simply managed their own workdays, tasks and goals—similar to Victorian-era artisans, prior to the advent of the factory system.

When pressed, the company defended its choice to not track individual performance. The two reasons it gave would come back to bite the managers:

  • They were confident that individual performance, if measured, would only vary by about 5% to 10%, maybe 15% at most.
  • They were equally confident that imposing time and productivity quotas on processors would increase loss severity. In other words, they were completely sure that faster claims processing equates to overpayment of claims.

However, their very own data contradicted both of these notions—in a huge way:

  • First, the “long tail” of claims processors revealed a 250% variance between the top and bottom quartiles of individual performers—that’s 15 to 50 times higher than what management believed to be the case. In other words, the top three quartiles were out-processing the bottom quartile so much that there was no hope of the bottom quartile catching up—even getting close to the average. Put another way: Reducing this variance alone would yield a 25% capacity gain—an operating expense savings. And it could be accomplished by the top performers’ simply processing just one more claim per day—an increase they’d barely even notice.
  • Second—and just as important—the data revealed that the slower performers actually overpaid each claim by an average of 50%, an amount that totaled in the scores of millions, swamping the amount spent to pay their salaries. This carrier was thus getting the worst of both worlds with its lowest performers: They were slower, and vastly more costly. Not only that, they dragged down the average performance figures (not to mention morale) of the faster, leaner producers. (A minimal sketch of this quartile analysis follows this list.)
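Here is a minimal sketch of that quartile analysis, assuming a per-adjuster extract with hypothetical columns (adjuster_id, claims_closed_per_day, avg_paid_per_claim). It is an illustration of the technique, not the carrier’s actual data model.

```python
# Minimal sketch: group adjusters into productivity quartiles and compare
# throughput and average loss paid per claim across quartiles.
import pandas as pd

def quartile_profile(adjusters: pd.DataFrame) -> pd.DataFrame:
    """Compare the 'long tail' of adjuster productivity and payment severity."""
    adjusters = adjusters.copy()
    adjusters["quartile"] = pd.qcut(
        adjusters["claims_closed_per_day"], 4, labels=["Q1 (slowest)", "Q2", "Q3", "Q4 (fastest)"]
    )
    profile = adjusters.groupby("quartile", observed=True).agg(
        adjusters_in_group=("adjuster_id", "count"),
        avg_claims_per_day=("claims_closed_per_day", "mean"),
        avg_paid_per_claim=("avg_paid_per_claim", "mean"),
    )
    # Ratio of top-quartile to bottom-quartile throughput: the productivity gap.
    gap = (
        profile.loc["Q4 (fastest)", "avg_claims_per_day"]
        / profile.loc["Q1 (slowest)", "avg_claims_per_day"]
    )
    print(f"Top-to-bottom quartile productivity ratio: {gap:.1f}x")
    return profile

if __name__ == "__main__":
    data = pd.read_csv("adjuster_performance.csv")  # hypothetical extract
    print(quartile_profile(data))
```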

The impact from these revelations equated to losses measured in hundreds of millions of dollars. Incidentally, the story above is not rare; rather, it’s typical. As we’d mentioned, it’s based on a mashup of several insurance carriers.

Here’s one other standardization eye-opener. The claims process itself was rife with rework, turnaround, pushback and error correction. As a claim made its way through reporting, contact, dispatch, estimating, investigation and finally payment, it bounced and backtracked between the FNOL (first notice of loss) team, the appraiser, the casualty adjuster and so on. When presented with this “subway map” of the as-is process, the insurer’s executive sponsors were aghast.

Fortunately, the “spaghetti mess” can be cleaned up, even without new technology.

Business Intelligence (BI)

The Lab often encounters P&C insurance companies that invest heavily in systems such as Oracle Business Intelligence or Microsoft Power BI yet struggle to get value from these advanced analytics platforms.

Many of the issues stem from failing to “complete the final mile” when it comes to data definitions and hierarchies; that is, companies aren’t reconciling the IT-defined data elements with their own business-defined operations characteristics. This problem can often be traced to a disconnect between business leaders and IT organizations.

An IT person could—and often does—assemble and manage business intelligence for business units. But that person needs to understand the business well enough to confidently select which data to use and aggregate so that the final KPI (key performance indicator) in the resulting dashboard represents reality. And even if the person managed to create a BI picture of perfect “reality,” there’s no guarantee that the business would accept it. Let’s be frank: Creating useful BI and related analytics is a towering challenge. It’s overwhelming not only to IT; most businesspeople lack both the documentation and the comprehensive perspective to pull it off. So the status quo continues: The “business language” experts talk with the “IT language” experts, and the business executives still lack the Google Maps-style insights they seek.


Another BI stumbling block is the “false precision” of too much data and too many categories. Consider the automotive insurer with “claims types gone wild”—such as “Accident: Right front fender,” and “Accident: Left front fender,” and so on. The Lab’s BI dashboards will often reveal to claims executives that 20% of the claims types represent 80% of the volume—another valuable, “long-tail” insight.
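Here is a minimal sketch of that long-tail analysis, assuming a claims extract with a hypothetical claim_type column; the 80% threshold is simply the Pareto cutoff from the example above.

```python
# Minimal sketch of the "20% of claim types drive 80% of volume" analysis.
import pandas as pd

def pareto_claim_types(claims: pd.DataFrame, volume_share: float = 0.80) -> pd.DataFrame:
    """Identify the small set of claim types that account for most of the volume."""
    counts = claims["claim_type"].value_counts()          # claim types, most common first
    cumulative_share = counts.cumsum() / counts.sum()     # running share of total volume
    head = counts[cumulative_share <= volume_share]       # the "short head" of the distribution
    print(
        f"{len(head)} of {len(counts)} claim types "
        f"({len(head) / len(counts):.0%}) account for about {volume_share:.0%} of volume"
    )
    return head.to_frame("claim_count")

if __name__ == "__main__":
    claims = pd.read_csv("claims_extract.csv")  # hypothetical file
    print(pareto_claim_types(claims))
```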

Robotic Process Automation (RPA)

As noted earlier, operational issues and customer-experience or CX challenges are typically two sides of the same coin. Often, both can be addressed by robots.

For example, consider the policyholder who calls the FNOL contact center and validates info. Then the person is handed off to another rep, who must re-validate the info. And then another. And another.

That’s not just an operational mess. It also creates a clear and present danger, hiding in plain sight, of losing that customer.

While robots can speed repetitive chores, they can’t fix the underlying business processes (remember that FNOL spaghetti map, above?). Fortunately, Knowledge Work Standardization can. And once it does, the robotic possibilities are practically limitless: They span everything from sales prospecting to renewal notices to premiums/commissions reconciliation.

You saw how RPA bot deployments augmented the work of claims-processing managers. The next step is to augment the hands-on work of rank-and-file adjusters. Again, don’t try to replace the entire job position. Instead, augment the processor’s activities. In particular, hand off the adjuster’s mind-numbingly repetitive activities to the bot. This will allow the adjuster more time and thought—not to mention accountability—for complying with the policy’s payment guidelines.

For P&C claims, there are numerous opportunities to “park a bot” on top of routine, repetitive, knowledge-worker activity. Think of these as admin-assistant bots for adjusters. Here are two of many examples:

  • The “pre-adjudication assistant” bot. Adjusters spend lots of time sorting out “unstructured” information upon receipt of the FNOL. For example, they read descriptions of damage that arrive in free-text data fields, standardize them, then proceed to adjudication activities such as looking up coverages and setting reserves for the claim, prior to contacting the insured. Most, if not all, of these activities can be performed by RPA bots—but only if the inbound information is standardized. The Lab has used its KWS methods to create drop-down menus for this data and make it RPA-friendly. This standardization can be done incrementally, enabling bots to prep claims for adjusters: They look up coverage limits, set reserves and prep for the adjuster’s call to the insured.
  • The “customer contact assistant” bot. Adjusters, and others in the contact center, spend a great deal of avoidable and inefficient effort communicating with policyholders regarding their claims: advising status, notifying for damage inspections, obtaining corrections to initial NIGO information and more. Simply contacting customers can be a tedious, time-consuming and inefficient process; bots can help. They can be configured to send notifications to customers, preempting calls to the contact center. Bots can also initiate “text-call-text” notifications to customers’ cell phones. Here’s how it works: Bots, at the push of a button by the adjuster, send a text to the customer. The text may notify the customer to expect a call from the adjuster—avoiding call screening. The adjuster calls and gets through. Afterward, the bot sends a confirmation of the issue or next step (a minimal sketch of this flow follows this list).
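Here is a minimal sketch of the “text-call-text” flow from the second bullet above. The send_sms helper is a hypothetical stand-in for whatever messaging gateway the carrier actually uses, and the phone number and claim details are placeholders.

```python
# Minimal sketch of the "text-call-text" customer contact flow.
from dataclasses import dataclass

def send_sms(phone: str, message: str) -> None:
    """Hypothetical messaging-gateway call; replace with the carrier's SMS provider."""
    print(f"SMS to {phone}: {message}")

@dataclass
class Claim:
    claim_id: str
    policyholder_phone: str
    adjuster_name: str

def notify_before_call(claim: Claim) -> None:
    """Step 1: tell the policyholder the adjuster is about to call, to avoid call screening."""
    send_sms(
        claim.policyholder_phone,
        f"{claim.adjuster_name} from your insurer will call you shortly about claim {claim.claim_id}.",
    )

def confirm_after_call(claim: Claim, next_step: str) -> None:
    """Step 3: confirm the outcome or next step after the adjuster's call."""
    send_sms(
        claim.policyholder_phone,
        f"Thanks for speaking with {claim.adjuster_name}. Next step for claim {claim.claim_id}: {next_step}",
    )

if __name__ == "__main__":
    claim = Claim("CLM-2024-000123", "+15555550100", "Jordan")  # placeholder data
    notify_before_call(claim)
    # ...the adjuster places the call (step 2)...
    confirm_after_call(claim, "a damage inspection is scheduled for Tuesday")
```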

Make the Move Toward Improved Insurance Operations & Reduced Loss Ratio

Claims processing, as we’d mentioned at the outset, is just one area within the P&C carrier organization where the power triumvirate of Knowledge Work Standardization (KWS), business intelligence (BI) and robotic process automation (RPA) rapidly delivers massive windfall value.

5 Scary Thoughts on BI, Data Warehouses

With Halloween just past, it seems appropriate to blog about something thematic. Usually, the word “scary” isn’t used to describe insurance writings, but there is a twist to one important question that can be as frightening as things that go bump in the night.

Often, a technology adoption discussion starts out with a question about why an insurer should adopt a specific technology. That’s a good question. But the more telling question may be: What happens if you don’t adopt it?

It’s a scary way to look at technology adoption, perhaps, but it is important to assess the implications of not adopting specific technologies. When it comes to business intelligence (BI) tools and data warehouse modernization, there are some very frightening downsides to not putting these critical components of an enterprise data strategy first.


  • SMA research shows that 53% of responding insurers believe establishing a data strategy should precede a core technology initiative. That still leaves a good percentage of insurers who see things differently. And simply believing a data-first strategy is the right way to go doesn’t mean that executing it is easy. However, insurers who put off data strategies until after core system choices have been made run the risk of choosing a provider whose architecture doesn’t fit the data and warehouse strategy that would work best for their organization.
  • Migrating legacy data to modern technology has kept many an IT and business leader awake at night or has given them a data migration nightmare. In fact, the sheer magnitude of doing a legacy data migration has led many insurers to decide to leave legacy data alone, resulting in a myriad of work-arounds. This will most certainly lead to poor service for both customers and distributors. It can also lead to a great deal of added expense and employees who are frustrated by having to deal with work-arounds. A solid data strategy with BI tools and a modern data warehouse can make the migration of legacy data into the new systems significantly easier.
  • Business leaders are clamoring for analytics. Most of the technology demos we see at SMA address (or at least mention) analytics value in one way or another. However, without a data strategy, there may be a disconnect between the data architecture of the technology and the data structures decided on in a later data initiative. The result: delayed analytics value. Waiting for analytics can make business partners feel they are only getting incremental value from the new technology.
  • Many insurers have accelerated core modernization initiatives because of the pressing need for modern portals and expanded mobile capabilities. However, if customer and distributor data is still fragmented — not centralized in a modern data warehouse and not unified with a common data strategy — the full value of portals and mobile will not be attained. And no insurer can afford to fail at fully delivering in these areas.
  • Across a whole host of technology categories, software with out-of-the-box reporting tools is fairly common. On the surface, this seems to be an answer to a lot of problems. However, while technology-specific reporting tools have value, without an enterprise BI reporting tool an insurer can be creating reporting silos… and no insurer needs more silos. Additionally, while software-specific reporting tools may be useful for a specific category of data, such as operational data (which can be very good), they may not be what insurers need to gain deep insights into all categories of data.

There are a lot of scary things in the world today — besides Halloween — that we can’t control: terrorism, cybercrime and global warming, to mention a few. But all insurers can, and should, take steps to minimize the things that provoke fear. Deciding on an enterprise data strategy, business intelligence tools and modern data warehouses — and doing so first — is a way to mitigate other worrisome outcomes. Remember when deciding on an enterprise data strategy, BI tools and warehouses was the scary thing? Fortunately, technology has matured. And modern data management tools can be the key to dealing with the next wave of scary things.


To Go Big (Data), Try Starting Small

Just about every organization in every industry is rolling in data—and that means an abundance of opportunities to use that data to transform operations, improve performance and compete more effectively.

“Big data” has caught the attention of many—and perhaps nowhere more than in the healthcare industry, which has volumes of fragmented data ready to be converted into more efficient operations, bending of the “cost curve” and better clinical outcomes.

But, despite the big opportunities, for most healthcare organizations, big data thus far has been more of a big dilemma: What is it? And how exactly should we “do” it?

Not surprisingly, we’ve talked to many healthcare organizations that recognize a compelling opportunity, want to do something and have even budgeted accordingly. But they can’t seem to take the first step forward.

Why is it so hard to move forward?

First, most organizations lack a clear vision and direction around big data. There are several fundamental questions that healthcare firms must ask themselves, one being whether they consider data a core asset of the organization. If so, then what is the expected value of that asset, and how much will the company invest annually toward maintaining and refining that asset? Oftentimes, we see that, although the organization may believe that data is one of its core assets, in fact the company’s actions and investments do not support that theory. So first and foremost, an organization must decide whether it is a “data company.”

Second is the matter of getting everyone on the same page. Big data projects are complex efforts that require involvement from various parties across an organization. Data necessary for analysis resides in various systems owned and maintained by disparate operating divisions within the organization. Moreover, the data is often not in the form required to draw insight and take action. It has to be accessed and then “cleansed”—and that requires cooperation from different people from different departments. Likely, that requires them to do something that is not part of their day jobs—without seeing any tangible benefit from contributing to the project until much later. The “what’s in it for me” factor is practically nil for most such departments.

Finally, perception can also be an issue. Big data projects often are lumped in with business intelligence and data warehouse projects. Most organizations, and especially healthcare organizations, have seen at least one business intelligence and data warehouse project fail. People understand the inherent value but remain skeptical and insufficiently invested to make such a transformational initiative successful. Hence, many are reluctant to commit too deeply until it’s clear the organization is actually deriving tangible benefits from the data warehouse.

A more manageable approach

In our experience, healthcare organizations make more progress in tapping their data by starting with “small data“—that is, well-defined projects of a focused scope. Starting with a small scope and tackling a specific opportunity can be an effective way to generate quick results, demonstrate potential for an advanced analytics solution and win support for broader efforts down the road.

One area particularly ripe for opportunity is population health. In a perfect world with a perfect data warehouse, there are infinite disease conditions to identify, stratify and intervene for to improve clinical outcomes. But it might take years to build and shape that perfect data warehouse and find the right predictive solution for each disease condition and comorbidity. A small-data project could demonstrate tangible results—and do so quickly.

A small-data approach focuses on one condition—for example, behavioral health, an emerging area of concern and attention. Using a defined set of data, it allows you to study sources of cost and derive insights from which you can design and target a specific intervention for high-risk populations. Then, by measuring the return on the intervention program, you can demonstrate the value of the small-data solution; for example, savings of several million dollars over a one-year period. That, in turn, can help build a business case for taking action, possibly on a larger scale, and gaining the support of other internal departments.

While this approach helps build internal credibility, which addresses one of the biggest roadblocks to big data, it does have some limitations. There is a risk that initiating multiple independent small-data projects can create “siloed” efforts with little consistency and potential for fueling the organization’s ultimate journey toward using big data. Such risks can be mitigated with intelligent and adaptive data architecture and a periodic evaluation of the portfolio of small-data solutions.

Building the “sandbox” for small-data projects

To get started, you need two things: 1) a potential opportunity to test and 2) tools and an environment that enable fast analysis and experimentation.

It is important to understand quickly whether a potential solution has a promising business case, so that you can move quickly to implement it—or move on to something else without wasting further investment.

If a business case exists, proceed to find a solution. Waiting to procure servers for analysis or for permission to use an existing data warehouse will cost valuable time and money. So that leaves two primary alternatives for supporting data analysis: leveraging cloud-hosted big-data platforms such as Hadoop with in-house expertise, or partnering with an organization that provides a turnkey solution for establishing analytics capabilities within a couple of days.

You’ll then need a “sandbox” in which to “play” with those tools. The “sandbox” is an experimentation environment, established outside of the organization’s production systems and operations, that facilitates analysis of an opportunity and testing of potential intervention solutions. In addition to the analysis tools, it also requires resources with the skills and availability to interpret the analysis, design solutions (e.g., a behavioral health intervention targeted to a specific group), implement the solution and measure the results.

Then building solutions

For building a small-data initiative, it is a good idea to keep a running list of potential business opportunities that may be ripe for cost reduction or other benefits. Continuing our population health example, this might range from areas as simple as finding and intervening for conditions that lead to the common flu and reduced employee productivity, to preventing pre-diabetics from becoming diabetic, to behavioral health. In particular, look at areas where there is no competing intervention solution already in the marketplace and where you believe you can be a unique solution provider.

It is important to establish clear “success criteria” up front to guide quick “go” or “no-go” decisions about potential projects. These should not be specific to the particular small-data project opportunity but rather generic enough to apply across topics—as they become the principles guiding small data as a journey to broader analytics initiatives. Examples of success criteria might include:

– Cost-reduction goals
– Degree to which the initiative changes clinical outcomes
– Ease of access to data
– Ease of cleansing data so that it is in a form needed for analysis

For example, you might have easy access to data, but it requires a lot of effort to “clean” it for analysis—so it isn’t actually easy to use.

Another important criterion is presence of operational know-how for turning insight into action that will create outcomes. For example, if you don’t have behavioral health specialists who can call on high-risk patients and deliver the solution (or a partner that can provide those services), then there is little point in analyzing the issue to start with. There must be a high correlation between data, insight and application.

Finally, you will need to consider the effort required to maintain a specific small-data solution over time. Consider, for instance, a new predictive model that helps identify high-risk behavioral health patients or high-risk pregnancies. Will it require a lot of rework each year to adjust the risk model as more data becomes available? If so, that affects the solution’s ease of use. Small-data solutions need to be dynamic and able to adjust easily to market needs.

Just do it

Quick wins can accelerate progress toward realizing the benefits of big data. But realizing those quick wins requires the right focus—”small data”—and the right environment for making rapid decisions about when to move forward with a solution or when to abandon it and move on to something else. If in a month or two, you haven’t produced a solution that is translating into tangible benefits, it is time to get out and try something else.

A small-data approach requires some care and good governance, but it can be a much more effective way to make progress toward the end goal of leveraging big data for enterprise advantage.

This article first appeared at Becker’s Hospital Review.

7 Ways Your Data Can Hurt You

Your data could be your most valuable asset, and participants in the workers’ compensation industry have loads available because they have been collecting and storing data for decades. Yet few analyze data to improve processes and outcomes or to take action in a timely way.

Analytics (data analysis) is crucial to all businesses today to gain insights into product and service quality and business profitability, and to measure value contributed. But processes need to be examined regarding how data is collected, analyzed and reported. Begin by examining these seven ways data can hurt or help.

1. Data silos

Data silos are common in workers’ compensation. Individual data sets are used within organizations and by their vendors to document claim activity. Without interoperability (the ability of a system to work with other systems without special effort on the part of the user) or data integration, the silos naturally fragment the data, making it difficult to gain full understanding of the claim and its multiple issues. A comprehensive view of a claim includes all its associated data.

2. Unstructured data

Unstructured documentation, in the form of notes, leaves valuable information on the table. Notes sections of systems contain important information that cannot be readily integrated into the business intelligence. The cure is to incorporate data elements such as drop-down lists to describe events, facts and actions taken. Such data elements provide claim knowledge and can be monitored and measured.
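To picture the cure, here is a minimal sketch of a structured claim-note record. The event categories are hypothetical examples of what a drop-down list might contain; once the event is a coded value rather than free text, it can be counted, monitored and measured.

```python
# Minimal sketch of replacing free-text claim notes with controlled,
# reportable data elements. Event categories are hypothetical examples.
from dataclasses import dataclass
from datetime import date
from enum import Enum

class ClaimEvent(Enum):
    INJURED_WORKER_CONTACTED = "Injured worker contacted"
    TREATMENT_AUTHORIZED = "Treatment authorized"
    RETURN_TO_WORK_DISCUSSED = "Return to work discussed"
    SETTLEMENT_OFFERED = "Settlement offered"

@dataclass
class ClaimNote:
    claim_id: str
    event: ClaimEvent        # selected from a drop-down list, not typed free-form
    event_date: date
    comment: str = ""        # optional free text, no longer the system of record

if __name__ == "__main__":
    note = ClaimNote("WC-2024-0042", ClaimEvent.TREATMENT_AUTHORIZED, date(2024, 3, 5))
    print(note.event.value, note.event_date)  # coded events roll up cleanly into reports
```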

3. Errors and omissions

Manual data entry is tedious work and often results in skipped data fields and erroneous content. When users are unsure of what should be entered into a data field, they might make up the input or simply skip the task. Management has a responsibility to hold data entry people accountable for what they add to the system. It matters.

Errors and omissions can also occur when data is extracted by an OCR methodology. Optical character recognition is the recognition of printed or written text characters by a computer. OCR output should be reviewed regularly for accuracy and to be sure the entire scope of content is being retrieved and added to the data set. Changing business needs may result in new data requirements.

4. Human factors

Other human factors also affect data quality. One is intimidation by IT (information technology). Usually this is not intended, but remember that people in IT are not claims adjusters or case managers. The things of interest and concern to them can be completely different, and they use different language to describe those things.

People in business units often have difficulty describing to IT what they need or want. When IT says a request will be difficult or time-consuming, the best response is to persist.

5. Timeliness

There needs to be timely, appropriate reporting of critical information found in current data. The data can often reveal important facts that can be reported automatically and acted upon quickly to minimize damage. Systems should be used to continually monitor the data and report, thereby gaining workflow efficiencies. Time is of the essence.

6. Data fraud

Fraud finds its way into workers’ compensation in many ways, even into its data. The most common data fraud is found in billing—overbilling, misrepresenting diagnoses to justify procedures and duplicate billing are a few of the methods. Bill review companies endeavor to uncover these hoaxes.

Another, less obvious means of fraud is through confusion. A provider may use multiple tax IDs or NPIs (national provider identifiers) to obscure the fact that a whole set of bills is coming from the same individual or group. The system will treat the multiple identities as different providers and fail to flag the culprit. Providers can achieve the same result by using different names and addresses on bills. Analysis of provider performance is made difficult or impossible when the provider cannot be accurately identified.
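Here is a minimal sketch of one way to surface that pattern: group bills by a normalized provider name and address and count how many distinct identities appear. All column names are hypothetical, and the crude normalization stands in for the matching logic a bill review company would actually use.

```python
# Minimal sketch of flagging providers who bill under multiple tax IDs or NPIs.
# Assumes a billing extract with hypothetical columns: provider_name,
# provider_address, tax_id, npi, billed_amount.
import pandas as pd

def normalize(text: str) -> str:
    """Crude normalization so 'Main St.' and 'MAIN STREET' group together."""
    return " ".join(text.upper().replace(".", "").replace(",", "").split())

def flag_multi_identity_providers(bills: pd.DataFrame) -> pd.DataFrame:
    bills = bills.copy()
    bills["provider_key"] = (
        bills["provider_name"].map(normalize) + " | " + bills["provider_address"].map(normalize)
    )
    summary = bills.groupby("provider_key").agg(
        distinct_tax_ids=("tax_id", "nunique"),
        distinct_npis=("npi", "nunique"),
        total_billed=("billed_amount", "sum"),
    )
    # Providers billing under several identities warrant a closer look.
    return summary[(summary["distinct_tax_ids"] > 1) | (summary["distinct_npis"] > 1)]

if __name__ == "__main__":
    bills = pd.read_csv("bill_review_extract.csv")  # hypothetical extract
    print(flag_multi_identity_providers(bills))
```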

7. Data as a work-in-process tool

Data can be used as a work-in-process tool for decision support, workflow analysis, quality measurement and cost assessment, among other initiatives. Timely, actionable information can be applied to work flow and to services to optimize quality performance and cost control.

Accurate and efficient claims data management is critical to quality, outcome and cost management. When data accuracy and integrity are overlooked as an important management responsibility, the organization will be hurt.

3 Ways to Boost Agency Productivity

In the not too distant past, consumers went to independent agents for all of their insurance needs – whether simple or complex – because insurance was often an elusive concept to the man on the street. At the same time, insurance coverage was considered something everyone must have, so when insurance-related questions came up, many consumers’ initial instinct was, “I have to talk to my agent.”

Over the past few years, this paradigm has shifted toward consumers being much more willing and able to build an understanding of their needs. This trend is broadly seen across nearly every industry and is accelerating in insurance. While the trusted relationship with an agent is often still crucial, insurance consumers today are researching, purchasing and interacting with the insurance industry in new ways, and increasingly on their own terms. In working with agencies and end consumers around the industry, we think the shifting behavior of consumers can be summarized in two key ways:

  • The Knowledgeable Consumer
    This consumer actively researches insurance online and consults his peer network prior to purchasing policies – either online or in person. How can you quickly and effectively service these consumers before they research other options or take their business elsewhere?
  • The Always-On Consumer
    This consumer wants information anytime, anywhere via any device, be it smartphone, tablet or desktop computer. These consumers don’t want to stop by your office for an auto ID card or certificate of insurance. How can you give them access to their insurance information when and where they want it?

One thing these two types of consumers have in common is the expectation of instant access to information. From an agent’s perspective, providing a mechanism for online service improves the experience by giving consumers the flexibility to interact with your agency when and how they want. And while there may still be a window of opportunity for this to be considered a differentiator for the agency, the day is approaching when nearly every consumer will expect and demand it. Consumers who don’t get this immediate accessibility and flexibility will take their business elsewhere. Further, by pushing common transactions online, agencies can free resources to focus on higher-value service interactions with consumers.

As seen across nearly every industry, advanced technology should be a key element of the agency strategy to meet these business objectives and the evolving expectations of insurance consumers. Agencies and brokerages are able to become more productive with relative ease thanks to enhanced data, mobility, better communication and increased adoption of third-party apps and other tools.

As an agency considers its business strategy, I’ll suggest there are three key considerations when it comes to the role technology solutions can play:

  1. Standardize and Dissect Your Data
  • Standardized Workflows
    To the extent it makes sense for your business, workflow consistency can yield real productivity gains and help capture better, more comprehensive customer risk and demographic information your agency can use to market, account-round and engage customers more effectively. By leveraging standardized workflows, agency owners ensure data entry is consistent across the agency – regardless of location. Additionally, standardized workflows reduce the number of workarounds conducted by staff – increasing productivity at the outset and reducing any time spent rectifying workarounds at the back end. The result will be improved quality and completeness of the underlying data.
  • Business Intelligence
    Over time, agencies and brokerages generate an immense amount of data – yet it can be difficult to access, analyze and understand that data in meaningful ways. Business intelligence (BI) solutions are one way to help turn all of that data into information. For example, principals can identify which producers are using their time most efficiently and driving the most revenue for the business. Principals can also evaluate how effectively their business is cross-selling and quickly identify new market opportunities. While traditional reporting can take hours if not days, BI solutions present your information in immediate and visual ways that drive new insights, enabling you to make more effective decisions to improve productivity and business growth.
  2. Think Easy Access
  • Mobile Technology
    New mobile technology affords producers all of the benefits of management system access within an office, without tethering them to a desk. This allows them to be more productive and to respond to clients and prospects more quickly and in the manner that current and prospective customers want and expect. For smaller agencies, where employees wear multiple hats within the organization, giving your employees access to tools when they’re away from the office is critical.
  • Online Access
    Consider how your business can leverage the cloud to drive productivity gains. The ability for service staff to work from home via the cloud, when needed, supports work-life balance and allows business to go on regardless of unexpected events. 
  3. Time Is Money
  • Paper No More
    Evaluate ways to become an all-digital agency and eliminate paper. Agencies and brokerages should leverage electronic signature and delivery of client documents, which reduces the time and expense of mailing paper copies.
  • Carrier Information Exchange
    Productivity gains have increased over the years as carriers improved their interface and as agencies better understood how and where to enter data in carrier systems. The vast majority of agencies use personal lines policy detail download to reduce rekeying of data, saving, on average, 81 minutes a day per employee. In addition to download, using real-time for service and rating saves agency employees as much as an hour per day. Policy download yields daily time-savings of nearly an hour and a half per department employee for personal lines and nearly an hour for commercial lines. Take the time to automate communications with your carrier on the front end to save more time over the long term.
  • Online Client Self-Service
    As mentioned, today’s insurance consumer increasingly expects information anytime, anywhere. Agencies need to provide clients the ability to access policy and billing information on their terms, which helps strengthen relationships, ensures high retention rates and drives revenue gains. Self-service capability can increase staff productivity and decrease costs in commercial lines, as well as personal.

Technology will allow you to work faster and, in turn, will redefine the products and services you offer to your clients. While working faster is one thing, using technology to provide mobile access, enhanced communication and streamlined procedures to more quickly serve clients will also drive new business and customer retention.

For additional insights on how to use technology to bolster agency productivity, check out our eBook, “Working Smarter: Finding Agency Productivity Gains.”