
Leveraging the Power of Data Insights

The vast majority of insurance companies lack the infrastructure to mobilize around a true prescriptive analytics capability, and small- and medium-sized insurers are especially at risk when it comes to turning data insights into a competitive advantage. These insurers are constrained in the following key resource categories:

    • Access to, and the ability to manage, experienced data scientists
    • Ability to acquire or develop data visualization, machine learning and artificial intelligence capability
    • Experience and staff to manage extensive and complex data partnerships
    • Access to modern core insurance systems and data and analytics technology to leverage product innovation insights and new customer interactions

Changing customer behaviors, non-traditional competition and internal operational constraints are putting many traditional insurance companies—especially the smaller ones—at risk from a retention and growth perspective. These marketplace drivers create several pain points and constraints for small- and medium-sized insurers.

This is excerpted from a research report from Majesco. To read the full report, click here.

Digital Insurance, Anyone?

The digital banking conversation is alive and kicking within the FinTech world, focused on discussing the merits, definitions and initiatives around what it means for a bank to become digital across its entire technology and business stacks. I have yet to find the same level of discourse and vibrancy within the insurance world.

Spurred by Yan Ranchere’s latest blog post, I am adding my own thoughts to the insurance narrative or, dare I coin it, the “digital insurance” narrative.

First, let’s frame the discussion by attempting to define the evolution of the insurance model from old to current and future or digital:

Old Insurance Model:  This model is mostly paper-based, with an application collected from the customer by the agent and sent to the carrier. The agent quote is not binding and may indeed change once the carrier has reviewed the application. I would qualify this model as carrier-centric. The carrier does all the heavy lifting with data verification and underwriting, with little stimuli from external data feeds in real time; the agent merely serves as a conduit. As a result, underwriting and closing a policy may take several days or even several weeks.

Claims management and customer service are cumbersome. Arguably, this delivers poor service in today’s age of instantaneous expectations. Not only can the old model be considered carrier-centric, I would also venture it is product-centric (in the same way that the old banking model is product-centric). The implications from a technology point of view are the same as in the banking world: a thin front end, shaky middleware and a back end that is silo-driven and that makes it difficult to optimize underwriting or claims.

Current Insurance Model:  The current model optimized the old model and made the transition from carrier-centric to agent-centric, which means that things are less paper-based and more electronic and that there is more process pushed onto the agent to be closer to the customer. In this model, the agent is empowered to issue policies under certain limits and risk frameworks—the carrier is not the gating factor and central node anymore.

Instead of batch-processing policies at the carrier level, the system has moved to exception processing at the carrier level (when concerned with nonstandard data and policies), thereby leveraging the agent. The result is faster quotes and policies signed more quickly, with the time going from days and weeks to hours or just a day. Customer service will go the same route. Claims management will still remain the central concern of the carrier, though.

Digital Insurance Model:  This is the way of the future. It is neither carrier- nor agent-centric, and it certainly is not product-centric any more. This model is truly customer- and data-centric—very similar to what we witness in digital banking. The carrier reaches out to the customer in an omni-channel way. Third-party data sources are readily available, and the technology to process and digest the data is extremely effective and delivers fast and furiously. Machine learning allows for near-instantaneous underwriting at a carrier or agent level, any time, anywhere. The customer can now get a policy in minutes.

Processes after policy-signing follow a similar transformative route. The technology implications are material: new core systems of record, less silo effect, more integration, massive investments in data warehouses and in products and services that act as layers of connection between data repository centers, core systems, claims management platforms, underwriting platforms and omni-channel platforms.

Picture the carrier effectively plugged in to the external world via data sources, plugged in to the customer in myriad ways that were not possible in the past and plugged in to third-party providers, all of this in real (or near-real) time. That means no more of the old linear execution of the main insurance processes: customer acquisition, underwriting, claims management. Furthermore, with a fast-changing world and more complex customer needs, delivering a product is not the winning formula any more. Understanding the customer via data in a contextual manner is.

To be fair, insurance carriers have nearly completed massive upgrades to their database architecture and can claim the latest in data warehouse technology. Some carriers have gone the path of renovating their channels and going all-out digital. Others are refining the ways they engage new customers. Most are thinking of going mobile. Still, much remains to be done. These are exciting times.

Boiling down what a digital insurance model means, we can easily see the similarities with digital banking; digital insurance must be transparent, fast, ubiquitous and data-focused, and there must be an understanding that the customer is key and is not a product.

Once you digest this new model, it is easier to sift through the key trends that are reshaping and will reshape the industry. I am listing a few that we followed at R66.  By no means is this an exhaustive list, nor is it ordered by priority, impact or size of opportunity:

1) Distribution channel disruption: There are three sub-trends here—a) the consolidation of brokers and agents, b) channels going all-out digital and disrupting brick-and-mortar models and c) carriers continuing to go direct and competing with brokers.

2) Insuring the sharing/renting economy: Think about Uber, Airbnb and the many other start-ups that are building the sharing economy. All of them need to or already are creating different types of coverage through their ecosystems. Carriers that focus on the specific risks, navigate the use cases, gather the right data and are forward-thinking will win big. James River is an insurance carrier that comes to mind in this space.

3) Connected data analysis: I do not use the term “big data” any more. Real-time connected data analysis is the right focus. Think of the integration of a series of hardware devices, or think of n+1 data sources. These are powerful, mind-blowing and will affect the trifecta of insurance profits: underwriting, claims management and customer acquisition.

4) Technology stack upgrades:  This means middleware to complement data warehouse investments, new systems of record, software platforms for underwriting (or claims management) and API galore. It’s the same story with banking; there is just a different insurance flavor.

5) Technology externalities: GPS, telematics, AI, machine learning, drones, IoT, wearables, smart sensors, visualization and next-generation risk analysis tools—you name it, these will help insurance companies get better at what they do, if they adopt and understand.

6) Mobile delivery:  How could I not list mobile delivery? Whether it is to improve customer acquisition, policy and claims management or customer service, we are going mobile, baby.

7) A la carte coverage: Younger generations are approaching ownership in different ways. As a result, a one-size-fits-all insurance policy will not work any more. We are already witnessing a la carte insurance based on car usage, homes or commercial real estate connected via sensors or IoT.

8) Specialty insurance products:  We live in a digital world, baby, which means cyber security, fraud and identity theft.

It should be noted that the above describes changes in the P&C industry and that the terms “carriers” and “reinsurers” can be used interchangeably. Furthermore, I have not focused on health insurance—I know next to nothing in that field.

Any insurance expert is welcome to reach out and educate me. Anyone as clueless as I am is welcome to add their thoughts, too!

This article first appeared on Pascal Bouvier’s blog, here.

How Machine Learning Changes the Game

Insurance executives can be excused for having ignored the potential of machine learning until today. Truth be told, the idea almost seems like something out of a 1980s sci-fi movie: Computers learn from mankind’s mistakes and adapt to become smarter, more efficient and more predictable than their human creators.

But this is no Isaac Asimov yarn; machine learning is a reality. And many organizations around the world are already taking full advantage of their machines to create new business models, reduce risk, dramatically improve efficiency and drive new competitive advantages. The big question is why insurers have been so slow to start collaborating with the machines.

Smart machines

Essentially, machine learning refers to a set of algorithms that use historical data to predict outcomes. Most of us use machine learning processes every day. Spam filters, for example, use historical data to decide whether emails should be delivered or quarantined. Banks use machine learning algorithms to monitor for fraud or irregular activity on credit cards. Netflix uses machine learning to serve recommendations to users based on their viewing history.
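The spam-filter example can be sketched in a few lines: a naive Bayes classifier that learns word frequencies from labeled historical messages and scores new ones. This is a minimal illustration, not any vendor's actual filter, and the training messages are invented:

```python
from collections import Counter
from math import log

def train(messages):
    """Count word frequencies and message totals per class from (text, label) pairs."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        for word in text.lower().split():
            counts[label][word] += 1
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the class with the higher Laplace-smoothed log-probability."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        n = sum(counts[label].values())
        score = log(totals[label] / sum(totals.values()))  # class prior
        for word in text.lower().split():
            score += log((counts[label][word] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Invented "historical data" standing in for past delivery decisions.
history = [
    ("win a free prize now", "spam"),
    ("claim your free money", "spam"),
    ("meeting moved to monday", "ham"),
    ("lunch with the claims team", "ham"),
]
counts, totals = train(history)
print(classify("free prize money", counts, totals))  # → spam
```

The same learn-from-history pattern underlies the fraud-monitoring and recommendation examples; only the features and labels change.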

In fact, organizations and academics have been working away at defining, designing and improving machine learning models and approaches for decades. The concept was originally floated back in the 1950s, but – with no access to digitized historical data and few commercial applications immediately evident – much of the development of machine learning was largely left to academics and technology geeks. For decades, few business leaders gave the idea much thought.

Machine learning brings with it a whole new vocabulary, with terms such as "feature engineering," "dimensionality reduction" and "supervised and unsupervised learning," to name a few. As with all new movements, an organization must be able to bridge the two worlds of data science and business to generate value.

Driven by data

Much has changed. Today, machine learning has become a hot topic in many business sectors, fueled, in large part, by the increasing availability of data and low-cost, scalable, cloud computing. For the past decade or so, businesses and organizations have been feverishly digitizing their data and records – building mountains of historical data on customers, transactions, products and channels. And now they are setting their minds toward putting it to good use.

The emergence of big data has also done much to propel machine learning up the business agenda. Indeed, the availability of masses of unstructured data – everything from weather readings through to social media posts – has not only provided new data for organizations to comb through, it has also allowed businesses to start asking different questions from different data sets to achieve differentiated insights.

The continuing drive for operational efficiency and improved cost management has also catalyzed renewed interest in machine learning. Organizations of all stripes are looking for opportunities to be more productive, more innovative and more efficient than their competitors. Many now wonder whether machine learning can do for information-intensive industries what automation did for manual-intensive ones.


A new playing field

For the insurance sector, we see machine learning as a game-changer. The reality is that most insurance organizations today are focused on three main objectives: improving compliance, improving cost structures and improving competitiveness. It is not difficult to envision how machine learning will form (at least part of) the answer to all three.

Improving compliance: Today's machine learning algorithms, techniques and technologies can be used on much more than just hard data like facts and figures. They can also be used to analyze information in pictures, videos and voice conversations. Insurers could, for example, use machine learning algorithms to better monitor and understand interactions between customers and sales agents to improve their controls over the mis-selling of products.

Improving cost structures: With a significant portion of an insurer’s cost structure devoted to human resources, any shift toward automation should deliver significant cost savings. Our experience working with insurers suggests that – by using machines instead of humans – insurers could cut their claims processing time down from a number of months to a matter of minutes. What is more, machine learning is often more accurate than humans, meaning that insurers could also cut down the number of denials that result in appeals they may ultimately need to pay out.

Improving competitiveness: While reduced cost structures and improved efficiency can certainly lead to competitive advantage, there are many other ways that machine learning can give insurers the competitive edge. Many insurance customers, for example, may be willing to pay a premium for a product that guarantees frictionless claim payout without the hassle of having to make a call to the claims team. Others may find that they can enhance customer loyalty by simplifying re-enrollment processes and client on-boarding processes to just a handful of questions.

Overcoming cultural differences

It is surprising, therefore, that insurers are only now recognizing the value of machine learning. Insurance organizations are founded on data, and most have already digitized existing records. Insurance is also a resource-intensive business; legions of claims processors, adjustors and assessors are required to pore over the thousands – sometimes millions – of claims submitted in the course of a year. One would therefore expect the insurance sector to be leading the charge toward machine learning. But it is not.

One of the biggest reasons insurers have been slow to adopt machine learning clearly comes down to culture. Generally speaking, the insurance sector is not widely viewed as being “early adopters” of technologies and approaches, preferring instead to wait until technologies have become mature through adoption in other sectors. However, with everyone from governments through to bankers now using machine learning algorithms, this challenge is quickly falling away.

The risk-averse culture of most insurers also dampens the organization’s willingness to experiment and – if necessary – fail in its quest to uncover new approaches. The challenge is that machine learning is all about experimentation and learning from failure; sometimes organizations need to test dozens of algorithms before they find the most suitable one for their purposes. Until “controlled failure” is no longer seen as a career-limiting move, insurance organizations will shy away from testing new approaches.

Insurance organizations also suffer from a cultural challenge common in information-intensive sectors: data hoarding. Indeed, until recently, common wisdom within the business world suggested that those who held the information also held the power. Today, many organizations are starting to realize that it is actually those who share the information who have the most power. As a result, many organizations are now keenly focused on moving toward a “data-driven” culture that rewards information sharing and collaboration and discourages hoarding.

Starting small and growing up

The first thing insurers should realize is that this is not an arms race. The winners will probably not be the organizations with the most data, nor will they likely be the ones that spent the most money on technology. Rather, they will be the ones that took a measured and scientific approach to building their machine learning capabilities and capacities and – over time – found new ways to incorporate machine learning into ever-more aspects of their business.

Insurers may want to embrace the idea of starting small. Our experience and research suggest that – given the cultural and risk challenges facing the insurance sector – insurers will want to start by developing a “proof of concept” model that can safely be tested and adapted in a risk-free environment. Not only will this allow the organization time to improve and test its algorithms, it will also help the designers to better understand exactly what data is required to generate the desired outcome.

More importantly, perhaps, starting with pilots and “proof of concepts” will also provide management and staff with the time they need to get comfortable with the idea of sharing their work with machines. It will take executive-level support and sponsorship as well as keen focus on key change management requirements.

Take the next steps

Recognizing that machines excel at routine tasks and that algorithms learn over time, insurers will want to focus their early “proof of concept” efforts on those processes or assessments that are widely understood and add low value. The more decisions the machine makes and the more data it analyzes, the more prepared it will be to take on more complex tasks and decisions.

Only once the proof of concept has been thoroughly tested and potential applications are understood should business leaders start to think about developing the business case for industrialization (which, to succeed in the long term, must include appropriate frameworks for the governance, monitoring and management of the system).

While this may – on the surface – seem like just another IT implementation plan, the reality is that machine learning should be championed not by IT but rather by the business itself. It is the business that must decide how and where machines will deliver the most value, and it is the business that owns the data and processes that machines will take over. Ultimately, the business must also be the one that champions machine learning.

All hail, machines!          

At KPMG, we have worked with a number of insurers to develop their “proof of concept” machine learning strategies over the past year, and we can say with absolute certainty that the Battle of Machines in the insurance sector has already started. The only other certainty is that those that remain on the sidelines will likely suffer the most as their competitors find new ways to harness machines to drive increasing levels of efficiency and value.

The bottom line is that the machines have arrived. Insurance executives should be welcoming them with open arms.

6 Opportunities for Carriers in ‘Big Data’

As insurers increasingly collect “big data” — think petabytes and exabytes — it’s now possible to use new data tools and technologies to mine data across three dimensions:

  • Large size/long duration — Traditional data mining usually was limited to three to five years of data. Now you can mine data accumulated over decades.
  • Real-time — With the advent of social media and the different sources, data pours in at ever-increasing speeds.
  • Variety of types — There's more variety of data, both structured and unstructured, with types that are drastically different from each other.

The ability to master the complexities of capturing, processing and organizing big data has led to several data-centric opportunities for carriers.

Personalized marketing

Big data is playing an increasing role in sales and marketing, and personalization is the hot industry trend. Gathering more information about customers helps insurance companies provide more-personalized products and services. Innovative companies are coming up with new ways to gather more information about customers to personalize their buying experience.

One example is Progressive’s Snapshot device, which tracks how often insureds slam on the brakes and how many miles they drive. It lets insurers provide personalized products based on customers’ driving habits. A device like Snapshot captures information from the car every second, collecting data like how often drivers brake, how quickly they accelerate, driving time, average speed, etc. According to facethefactsusa.org, U.S. drivers log an average of 13,476 miles per year, or 37 miles a day. Big data systems have to process this constant stream of data, coming in every second for however long the user takes to travel 37 miles. Even if only 10% to 15% of customers use the device, it is still a huge amount of data to process. The systems have to process all this information and use predictive models to analyze risks and offer a personalized rate to the user.
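As a sketch of what processing that per-second stream might look like, here is a minimal hard-braking detector over simulated speed samples. The threshold and trip data are illustrative assumptions, not Snapshot's actual logic:

```python
def hard_brake_events(speeds_mph, threshold_mph_per_s=7.0):
    """Return the sample indices where speed drops faster than the threshold.

    speeds_mph: per-second speed readings, as a telematics device
    such as Snapshot might record them.
    """
    events = []
    for t in range(1, len(speeds_mph)):
        decel = speeds_mph[t - 1] - speeds_mph[t]
        if decel >= threshold_mph_per_s:
            events.append(t)
    return events

# One simulated trip: steady cruising, a hard stop around t=4, then recovery.
trip = [35, 36, 35, 34, 25, 15, 14, 20, 26, 30]
print(hard_brake_events(trip))  # → [4, 5]
```

Counting events like these per mile driven is one plausible feature a pricing model could consume; the real system streams millions of such trips through predictive models to produce a personalized rate.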

People are increasingly using social media to voice their interests, opinions and frustrations, so analyzing social feeds can also help insurance companies better target new customers and respond to existing customers. Using big data, insurers can pinpoint trends, especially of complaints or dissatisfaction with current products and services. Getting ahead of the curve is crucial because bad reviews can spread like wildfire on the web.

Risk management 

The wealth of data now available to insurance companies — from both old and new data sources — offers ways to better predict risks and trends. Big data can be used to analyze decades of information and identify trends and newer dimensions like demographic change and behavioral evolution.

Process improvement and organizational efficiency

Another popular use is for constant improvement of organizational productivity by recording usage patterns of an organization’s internal tools and software. Better understanding of usage trends leads to:

  • Creation of more useful software that better fits the organization’s needs.
  • Avoidance of tools that do not have a good return on investment.
  • Identification of manual tasks that can be automated. For example, logs and usage patterns from tools at the agent’s office are important sources of information for understanding customer preferences and agency efficiency.

Automation of manual processes results in significant savings. But in huge, complex organizations, there are almost always overlapping or multiple instances of similar systems and processes that result in redundancy and increased cost of maintenance. Similarly, inadequate and inefficient systems require manual intervention, resulting in bottlenecks, inflated completion times and, most importantly, increased cost.

Using data from internal systems, analysts can study critical usage information from various tools and analyze productivity, throughput and turnaround times across a variety of parameters. This can help managers understand inadequacies of existing systems and identify redundancy.

The same data sources can also be used to predict peak and lean load times, so infrastructure teams can plan to provide appropriate computing resources during critical events. These measures add up quickly, resulting in significant cost savings and improved office efficiency.
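A minimal sketch of this kind of usage-log analysis, assuming a simplified event log with hypothetical task IDs and timestamps (real systems would pull these events from tool and workflow logs):

```python
from datetime import datetime

def turnaround_hours(log):
    """Hours between 'received' and 'completed' events, keyed by task id.

    log: iterable of (task_id, event, iso_timestamp) rows.
    """
    opened, durations = {}, {}
    for task_id, event, ts in log:
        t = datetime.fromisoformat(ts)
        if event == "received":
            opened[task_id] = t
        elif event == "completed" and task_id in opened:
            durations[task_id] = (t - opened[task_id]).total_seconds() / 3600
    return durations

# Invented log: task A2 takes a full day longer than A1, flagging a bottleneck.
log = [
    ("A1", "received",  "2016-02-01T09:00:00"),
    ("A2", "received",  "2016-02-01T09:30:00"),
    ("A1", "completed", "2016-02-01T11:00:00"),
    ("A2", "completed", "2016-02-02T09:30:00"),
]
print(turnaround_hours(log))  # → {'A1': 2.0, 'A2': 24.0}
```

Aggregating these durations by tool, team or time of day is what surfaces the redundant systems and manual bottlenecks described above.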

Automated learning

While big data technologies now help perform regular data-mining on a much bigger scale, that’s only the beginning. Technology companies are venturing into the fuzzy world of decision-making via artificial intelligence, and a branch of AI called machine learning has greatly advanced.

Machine learning deals with making computer systems learn constantly from data to progressively make more intelligent decisions. Once a machine-learning system has been trained to use specific pattern-analyzing models, it starts to learn from the data and works to identify trends and patterns that have led to specific decisions in the past. Naturally, when more data — along all of the big data axes — is provided, the system has a much better chance to learn more, make smarter decisions and avoid the need for manual intervention.

The insurance and financial industries pioneered the commercial application of machine learning techniques by creating computational models for risk analysis and premium calculation.  They can predict risks and understand the creditworthiness of a customer by analyzing their past data.
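As an illustrative sketch of such a risk model, here is a tiny logistic regression trained by plain gradient descent on made-up policyholder features. Real underwriting and credit models use far more data, richer features and more robust training, but the learn-weights-from-past-outcomes mechanism is the same:

```python
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def fit(xs, ys, lr=0.5, epochs=2000):
    """Learn logistic-regression weights from (features, outcome) history."""
    w = [0.0] * len(xs[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of log-loss w.r.t. the linear score
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def risk(x, w, b):
    """Predicted probability of a claim (or default) for feature vector x."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Invented features: (prior claim frequency, years without incident), scaled to [0, 1];
# label 1 means the customer went on to file a claim.
history = [((1.0, 0.0), 1), ((0.8, 0.1), 1), ((0.1, 0.9), 0), ((0.0, 1.0), 0)]
xs, ys = zip(*history)
w, b = fit(list(xs), list(ys))
print(risk((0.9, 0.0), w, b) > risk((0.0, 0.9), w, b))  # → True
```

A premium calculation would then map this predicted probability onto a rate table, which is where actuarial judgment re-enters the loop.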

While traditional systems dealt with tens of thousands of data records and took days to crunch through a handful of parameters to analyze risks using, for example, a modified Gaussian copula, the same is now possible in a matter of hours, with two major improvements. First, all available data can be analyzed, and second, risk parameters are unlimited.

Predictive analytics

Machine learning technology can use traditional and new data streams to analyze trends, helping build models that predict patterns and events with increased accuracy and convert those predictions into opportunities.

Traditional systems generally helped identify reasons for consistent patterns. For example, when analysis of decades of data exposed a consistent trend, like an increase in accident reporting during specific periods of the year, the results pointed to climatic or social causes such as holidays.

With big data and machine learning, predictive analytics now helps create predictions for claims reporting volumes and trends, medical diagnosis for the health insurance industry, new business opportunities and much more.
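A naive seasonal forecast of claims-reporting volumes can be sketched as a monthly-average profile over historical counts; production systems layer trend, weather and behavioral signals on top of this baseline, and the figures below are invented:

```python
from collections import defaultdict

def monthly_profile(claims):
    """Average claim volume per calendar month from (year, month, count) rows."""
    totals, years = defaultdict(int), defaultdict(set)
    for year, month, count in claims:
        totals[month] += count
        years[month].add(year)
    return {m: totals[m] / len(years[m]) for m in totals}

def predict(month, profile):
    """Naive seasonal forecast: next year's volume ≈ the historical monthly average."""
    return profile[month]

# Toy history: December spikes (holiday driving), July stays flat.
history = [
    (2013, 7, 100), (2013, 12, 150),
    (2014, 7, 104), (2014, 12, 158),
    (2015, 7, 102), (2015, 12, 154),
]
profile = monthly_profile(history)
print(predict(12, profile))  # → 154.0
```

Even this crude baseline is enough to staff claims teams for the December spike; machine learning earns its keep by explaining and predicting the deviations from it.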

Fraud detection

The insurance industry has always worked to devise new ways to detect fraud. With big data technology, it is now possible to look for fraud patterns across multiple aspects of the business, including claims, payments and provider shopping, and to detect them fairly quickly.

Machine learning systems can now identify new models and patterns of fraud that previously required manual detection. Fraud detection algorithms have improved tremendously with the power of machine learning. Consequently, near-real-time detection and alerting is now possible with big data. This trend promises to only keep getting better.
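One of the simplest building blocks behind such systems can be sketched as a z-score filter that flags claim amounts far above the historical norm. Production fraud models are far more sophisticated (and look at many signals beyond amount), and the claim values below are invented:

```python
from statistics import mean, stdev

def flag_outliers(amounts, z_cut=3.0):
    """Flag amounts more than z_cut sample standard deviations above the mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if (a - mu) / sigma > z_cut]

# Invented batch of claim amounts with one suspiciously large payout.
claims = [1200, 1350, 1100, 1280, 1190, 1320, 9800, 1250]
print(flag_outliers(claims, z_cut=2.0))  # → [9800]
```

In a near-real-time pipeline, each incoming claim would be scored against a rolling profile like this and routed to an investigator when it trips the threshold.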

These six opportunities are just the tip of the iceberg. The entire insurance industry can achieve precise and targeted marketing of products based on history, preferences and social data from customers and competitors. No piece of data, regardless of form, source or size, is insignificant. With big data technology and machine learning tools and algorithms, combined with the limitless power of the cloud computing platform, possibilities are endless.