
Industry’s Biggest Data Blind Spot

For the past 10 years, the insurance industry has been handcuffed by the weather data that’s been available to it – primarily satellite and radar. Although important, these tools leave insurers with a blind spot because they lack visibility into what is happening on the ground. Because of these shortcomings, insurance companies are facing unprecedented litigation and increases in premiums. To solve the problem, we must first review the current situation as well as what solutions have been proposed to resolve this data blind spot.

Why Satellite and Radar Aren’t Enough

While satellites and radar tell us a lot about the weather and are needed to forecast broad patterns, they leave large blind spots when it comes to what is actually happening on the ground. Current solutions only estimate what’s happening in the clouds and then predict an expected zone of impact, which can be very different from the actual zone of impact. As many know from experience, it is common for storms to contain pockets of far more intense damage, known as hyper-local storms.

See also: Why Exactly Does Big Data Matter?  

The Rise of the Storm-Chasing Contractor

In recent years, the industry has also been beleaguered by a new obstacle: the storm-chasing contractor. These companies target storm-hit areas with ads on Craigslist and the like. They also exploit insurers’ blind spots by canvassing the area and convincing homeowners there was damage, regardless of whether damage actually occurred. This practice can leave the homeowner with hefty (and unnecessary) bills, hurt the entire industry and lead to higher litigation costs.

Attempts to Solve the Data Blind Spot

Many companies have proposed solutions that aim to solve the insurance industry’s data blind spot. Could a possible solution lie in building better algorithms using existing data? Realistically, if the only improvement is to the current models or algorithms while the underlying data stays the same, there is no real improvement, because that data still has gaps. The algorithms will continue to produce flawed output and will be no better at generating actionable results. The answer must lie in a marked improvement in the foundational data.

If better data is required to solve this blind spot, one might think that a crowd-sourced data source would be the best alternative. On the surface, this may appear to be a good option because it collects millions of measurements that are otherwise unavailable. The reality, though, is that big data is only valuable when the entire data set can be trusted; while cell phones provide millions of measurements, even the cleaned data remains too inaccurate for crowd-sourced weather data to serve as a reliable dataset.

The alternative crowd-sourced weather networks that use consumer weather stations to collect data also lead to huge problems in data quality. These weather stations lack any sort of placement control. They can be installed next to a tree, by air conditioning units or on the side of a house – all of which cause inaccurate readings that lead to more flawed output. And although these types of weather stations are able to collect data on rain and wind, none are able to collect data on hail – which causes millions of dollars in insurance claims each year.

The Case for an Empirical Weather Network

To resolve the insurance industry’s blind spot, the solution must contain highly accurate weather data that can be translated into actionable items. IoT has changed what is possible, and, with today’s technology, insurers should be able to know exactly where severe weather has occurred and the severity of damage at any given location. The answer lies in establishing a more cost-effective weather station, one that is controlled and not crowd-sourced. By establishing an extensive network of weather stations with controlled environments, the data accuracy can be improved tremendously. With improved data accuracy, algorithms can be reviewed and enhanced so insurers can garner actionable data to improve their storm response and recovery strategies.

Creating an extensive network of controlled weather stations is a major step toward fixing the insurance industry’s data blind spot, but there is one additional piece of data that is required. It is imperative that these weather stations measure everything, including one of the most problematic and costly weather events – hail. Without gathering hail data, the data gathered by the controlled weather stations would still be incomplete. No algorithm can make up for missing this entire category of data.

See also: 4 Benefits From Data Centralization

While technology has improved tremendously over the past 10 years, many insurers continue to use the traditional data that has always been available to them. Now is the time for insurers to embrace a new standard for weather data to gain insights that eliminate their blind spot, improve their business and provide better customer experiences.

Understory has deployed micro-networks of weather stations that produce the deep insights and accuracy that insurers need to be competitive today. Understory’s data tracks everything from rain to wind to temperature and even hail. Our weather stations go well beyond tracking the size of the hail; they also factor in hail momentum, impact angle and size distribution over a roof. This data powers actionable insights.

AI: Everywhere and Nowhere (Part 2)

This is part two of a three-part series. Part 1 can be found here.

As we saw in a previous blog post on AI Everywhere and Nowhere, defining artificial intelligence is like trying to hit a disappearing target. As soon as any aspect of AI gains widespread adoption, people fail to distinguish it as an AI technology, and it dissolves into the sea of general technology. As a result, most detractors of AI, at least until recently, have questioned its real-world applications. In turn, AI never gains the respect and recognition it needs to evolve and reach its full potential. The beauty (and bane) of AI is that it is everywhere and yet nowhere – it is becoming ubiquitous in all of our interactions (at least all of our virtual interactions), yet most people fail to recognize and respect it.

Artificial Intelligence Is Ubiquitous Intelligence

You wake up in the morning and, from your bed, ask your digital assistant, “What is the weather like today?” It replies, “We have an 80% chance of snow in Lexington later in the evening – with accumulations of one to three inches.” The voice recognition, the natural language understanding of your question, the search through the Internet to get the right answer and the translation of that answer into speech are all AI.

You get into your office and open your email. Your messages are automatically sorted into “Social,” “Forums,” “Private” or whatever categories you have created, flagged as important or not, and marked with whatever tags you have set up to make your inbox easier to read and clear. The classification of your email based on the To, From, Subject and Content fields, the natural language processing to extract the right keywords and the machine learning to determine what is spam and who is important are all AI.
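As a rough illustration of the kind of email sorting described above, here is a minimal sketch using a naive Bayes text classifier from scikit-learn. The example messages, the categories and the way the From and Subject fields are combined are all invented and vastly simpler than any real mail provider’s system.

```python
# A minimal sketch of email categorization, using scikit-learn. The training
# examples, categories and field handling are hypothetical and simplified.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Each training example combines the From and Subject fields into one string.
emails = [
    "from deals@shopmail.example.com subject 50% off everything this weekend",
    "from noreply@forum.example.org subject new reply to your thread",
    "from alice@work.example.com subject q3 claims report attached",
    "from updates@socialsite.example.com subject you have 3 new friend requests",
]
labels = ["Promotions", "Forums", "Important", "Social"]

# CountVectorizer turns the text into word-count features; MultinomialNB
# learns which words are associated with each category.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

# Classify a new, unseen message.
print(model.predict(["from bob@work.example.com subject claims report review"]))
```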

You open up your online newspaper to check on the stock market performance from yesterday. You get a description of the overall stock market performance and the movement of your favorite stocks. The news is personalized to the topics, sources and authors that you want to read, and the newspaper has recommendations on what is trending among the sources or people you follow. The natural language generation based on structured stock market performance data, the curation of articles based on personal preferences and the recommendation engine for suggested articles are all AI.
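As a deliberately simple illustration of generating a market summary from structured data, here is a template-based sketch; the index name and figures are made up, and real natural language generation systems are far more sophisticated than a single template.

```python
# Toy template-based generation of a market summary from structured data.
# The index values below are invented for illustration only.
data = {"index": "S&P 500", "close": 4783.45, "change_pct": -0.6}

direction = "fell" if data["change_pct"] < 0 else "rose"
summary = (
    f"The {data['index']} {direction} {abs(data['change_pct']):.1f}% "
    f"to close at {data['close']:,.2f}."
)
print(summary)  # "The S&P 500 fell 0.6% to close at 4,783.45."
```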

You open up your favorite search engine, and, as you type your query in the search box, the system suggests possible completions. Then, the system recommends the right websites from billions of documents on the Internet and the right ad that matches your query, and fulfills the best bid for your search term among competing advertisers who want to personalize their message to you. The statistical inference in suggesting completions, the page rank algorithm that computes the relevant pages to display and the selection of the right ad using a real-time ad exchange are all AI.
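Because the paragraph above names the PageRank algorithm, here is a toy power-iteration version of it on a tiny, made-up link graph. Real search engines work at a vastly larger scale and blend many more signals than this.

```python
# Toy PageRank by power iteration on a tiny, hypothetical link graph.
import numpy as np

# links[i] lists the pages that page i links to (four made-up pages).
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n, damping = 4, 0.85

# Column-stochastic transition matrix: M[j, i] is the probability of
# moving from page i to page j by following one of i's links.
M = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        M[j, i] = 1.0 / len(outs)

rank = np.full(n, 1.0 / n)
for _ in range(50):  # iterate until the ranks settle
    rank = (1 - damping) / n + damping * M @ rank

print(rank)  # higher values = pages the algorithm treats as more important
```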

See also: How to Think About the Rise of the Machines

The list goes on and on. In fact, there is very little in our day-to-day life that is not affected by AI in some way. Yet the real power of AI is the insight that it provides us, without our being aware of it. The intelligence hidden behind many of our day-to-day interactions is powered by an AI algorithm related to machine learning, natural language processing or more generally unstructured data processing, intelligent search, intelligent agents and robotics. And, while AI is ubiquitous, we have only scratched the surface regarding what it can mean for us.

How Algorithms Will Transform Insurance

I can’t stop thinking about algorithms. I am obsessed, and I want to tell you why.

Let’s be clear: I am not a data scientist. I am a guy who finds technology and applications of technology fascinating. I am not writing this for technology nerds. I am writing this for professionals who want a working knowledge of technology.

If you are reading this, then you understand computers. A conventional computer program is nothing more than rules written by a human. Those rules are then executed and produce an output.

But the algorithms I am talking about here (machine learning algorithms) are so much more; they are breathtaking. With these, the computer writes its own rules and then creates output from those rules.

It’s easy to focus on the scary part of algorithms. In Avengers: Age of Ultron, a super algorithm results in a machine – Ultron – bent on destroying the world. I will leave those scenarios to the Elon Musks of the world.

Algorithms can do so much good

Think about any repetitive task you do. An algorithm can be created to solve that task. Some algorithms are used for fun. For example, Facebook uses algorithms to suggest friends for you to connect with. Google Photos uses an algorithm to identify faces and group pictures of the same person together (which can lead to terrible results).

Algorithms are already being used in the insurance industry. Take a look at CoverHound or PolicyGenius; the algorithms behind these applications quote personal lines of insurance based on your needs.

How algorithms work (and why they are awesome)

Again, I am not a data scientist, but here is my simple explanation of how most, if not all, algorithms are created (a toy code sketch follows the steps below):

1. Create a seed set.

First, you identify a seed set, which is the core learning that is taught to the algorithm. Yes, that’s right, even a computer algorithm has to be taught something from a human! For example, with the Facebook algorithm, I’m almost certain that the algorithm was first fed a giant spreadsheet that contained information about individuals and how they were connected (you do know your data created Facebook, Google and every other big data company you can think of, right?).

2. Feed the seed set to the algorithm.

The algorithm then reads all of the information it is fed and starts making its own rules. For example, the Facebook algorithm may determine: “Oh, I see, Jimmy likes Teenage Mutant Ninja Turtles, and he is connected with Bobby from the same city, and Bobby also likes Teenage Mutant Ninja Turtles. I bet Jimmy also knows Steve from the same city who also has a love for Donatello. They should connect.”

3. A human reviews the results.

A human (see, you are still needed!) then reviews the output of the application of the algorithm rules. In the Facebook example, a human might determine if Jimmy and Steve should actually connect on Facebook. Maybe they are part of rival gangs, and the algorithm didn’t recognize this. The human would then add this data to the spreadsheet and feed it back to the algorithm.

4. The algorithm rules are improved based on new input.

The algorithm creates rules to account for the new information. “Don’t connect rival gang members even if they live in the same city and like the Teenage Mutant Ninja Turtles.”

5. Steps three and four continue indefinitely.
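To make the loop concrete, here is a toy sketch of steps one through five using a decision tree from scikit-learn. The features, labels and the “rival gang” correction are all invented for illustration; this is not how Facebook’s actual system works.

```python
# Toy version of the five-step loop: train on a seed set, let a human correct
# the output, then retrain on the corrected data. All data here is invented.
from sklearn.tree import DecisionTreeClassifier

# Step 1: the seed set. Features per candidate pair of people:
# [same_city, shared_interest, rival_groups]; label 1 = "suggest a connection".
X = [[1, 1, 0], [1, 0, 0], [0, 1, 0], [0, 0, 0]]
y = [1, 0, 0, 0]

# Step 2: feed the seed set to the algorithm so it builds its own rules.
model = DecisionTreeClassifier().fit(X, y)

# Step 3: the model suggests connecting two rival gang members...
candidate = [[1, 1, 1]]
print("before feedback:", model.predict(candidate))  # likely suggests 1

# ...and a human reviewer overrules it and adds the correction to the data.
X.append([1, 1, 1])
y.append(0)

# Step 4: retrain so the rules account for the new information.
model = DecisionTreeClassifier().fit(X, y)
print("after feedback:", model.predict(candidate))  # now 0

# Step 5: in practice, steps three and four repeat indefinitely.
```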

Now stop for a second and think about all the rules that are built up in your head about people you connect with. Maybe you prefer to hang out with people who brew beer or read Harry Potter. There are literally hundreds of millions of personal preferences that human beings use to associate with people.

What if you could store all of those preferences and use them to connect people?

That’s Facebook.

Algorithms are good for insurance workers

Now think about your work and all the stuff you know and all of the stuff your colleagues know. What if all of that information could be fed into an algorithm and used to create rules? You could then use those rules to do your work more quickly.

I can hear you thinking, “But then I will be out of a job.” Therein lies the rub, one that has been discussed ad nauseam (more than 9 million results for a Google search on “technology will destroy jobs”). Fatalists argue that algorithms and the advanced software programs they create will destroy jobs. Famous technologist and investor Marc Andreessen expressed as much when he proclaimed in 2011 that “software is eating the world.”

But what happens if software starts doing repetitive tasks previously done by humans? I believe humans will find new ways to be productive. And I believe history supports my theory. But that’s a blog post for another day.

I will leave you with two questions.

What repetitive tasks do you despise?

Wouldn’t it be great if you could offload these tasks to a computer?

How Machine Learning Changes the Game

Insurance executives can be excused for having ignored the potential of machine learning until today. Truth be told, the idea almost seems like something out of a 1980s sci-fi movie: Computers learn from mankind’s mistakes and adapt to become smarter, more efficient and more predictable than their human creators.

But this is no Isaac Asimov yarn; machine learning is a reality. And many organizations around the world are already taking full advantage of their machines to create new business models, reduce risk, dramatically improve efficiency and drive new competitive advantages. The big question is why insurers have been so slow to start collaborating with the machines.

Smart machines

Essentially, machine learning refers to a set of algorithms that use historical data to predict outcomes. Most of us use machine learning processes every day. Spam filters, for example, use historical data to decide whether emails should be delivered or quarantined. Banks use machine learning algorithms to monitor for fraud or irregular activity on credit cards. Netflix uses machine learning to serve recommendations to users based on their viewing history and ratings.
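As a minimal, hypothetical illustration of “historical data used to predict outcomes,” the sketch below fits a logistic regression to a handful of made-up claim records and scores a new claim. It is not any insurer’s actual model; the features and labels are invented.

```python
# Minimal illustration of learning from historical data to predict an outcome.
# The claim records and feature names are entirely hypothetical.
from sklearn.linear_model import LogisticRegression

# Historical claims: [claim_amount_usd, days_since_policy_start, prior_claims]
X_hist = [[1200, 400, 0], [9500, 20, 2], [800, 900, 0], [15000, 5, 3]]
y_hist = [0, 1, 0, 1]  # 1 = the claim was flagged for manual review

model = LogisticRegression().fit(X_hist, y_hist)

# Score a new claim against the patterns learned from history.
new_claim = [[7000, 30, 1]]
print(model.predict_proba(new_claim)[0][1])  # estimated probability of a flag
```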

In fact, organizations and academics have been working away at defining, designing and improving machine learning models and approaches for decades. The concept was originally floated back in the 1950s, but – with no access to digitized historical data and few commercial applications immediately evident – much of the development of machine learning was largely left to academics and technology geeks. For decades, few business leaders gave the idea much thought.

Machine learning brings with it a whole new vocabulary: terms such as “feature engineering,” “dimensionality reduction” and “supervised and unsupervised learning,” to name a few. As with all new movements, an organization must be able to bridge the two worlds of data science and business to generate value.

Driven by data

Much has changed. Today, machine learning has become a hot topic in many business sectors, fueled, in large part, by the increasing availability of data and low-cost, scalable, cloud computing. For the past decade or so, businesses and organizations have been feverishly digitizing their data and records – building mountains of historical data on customers, transactions, products and channels. And now they are setting their minds toward putting it to good use.

The emergence of big data has also done much to propel machine learning up the business agenda. Indeed, the availability of masses of unstructured data – everything from weather readings through to social media posts – has not only provided new data for organizations to comb through, it has also allowed businesses to start asking different questions from different data sets to achieve differentiated insights.

The continuing drive for operational efficiency and improved cost management has also catalyzed renewed interest in machine learning. Organizations of all stripes are looking for opportunities to be more productive, more innovative and more efficient than their competitors. Many now wonder whether machine learning can do for information-intensive industries what automation did for manual-intensive ones.


A new playing field

For the insurance sector, we see machine learning as a game-changer. The reality is that most insurance organizations today are focused on three main objectives: improving compliance, improving cost structures and improving competitiveness. It is not difficult to envision how machine learning will form (at least part of) the answer to all three.

Improving compliance: Today’s machine learning algorithms, techniques and technologies can be used on much more than just hard data like facts and figures. They can also be used to analyze information in pictures, videos and voice conversations. Insurers could, for example, use machine learning algorithms to better monitor and understand interactions between customers and sales agents to improve their controls over the mis-selling of products.

Improving cost structures: With a significant portion of an insurer’s cost structure devoted to human resources, any shift toward automation should deliver significant cost savings. Our experience working with insurers suggests that – by using machines instead of humans – insurers could cut their claims processing time down from a number of months to a matter of minutes. What is more, machine learning is often more accurate than humans, meaning that insurers could also cut down on denials that lead to appeals they ultimately have to pay out.

Improving competitiveness: While reduced cost structures and improved efficiency can certainly lead to competitive advantage, there are many other ways that machine learning can give insurers the competitive edge. Many insurance customers, for example, may be willing to pay a premium for a product that guarantees frictionless claim payout without the hassle of having to make a call to the claims team. Others may find that they can enhance customer loyalty by simplifying re-enrollment processes and client on-boarding processes to just a handful of questions.

Overcoming cultural differences

It is surprising, therefore, that insurers are only now recognizing the value of machine learning. Insurance organizations are founded on data, and most have already digitized existing records. Insurance is also a resource-intensive business; legions of claims processors, adjustors and assessors are required to pore over the thousands – sometimes millions – of claims submitted in the course of a year. One would therefore expect the insurance sector to be leading the charge toward machine learning. But it is not.

One of the biggest reasons insurers have been slow to adopt machine learning clearly comes down to culture. Generally speaking, the insurance sector is not widely viewed as an “early adopter” of technologies and approaches, preferring instead to wait until technologies have matured through adoption in other sectors. However, with everyone from governments through to bankers now using machine learning algorithms, this challenge is quickly falling away.

The risk-averse culture of most insurers also dampens their willingness to experiment and – if necessary – fail in the quest to uncover new approaches. The challenge is that machine learning is all about experimentation and learning from failure; sometimes organizations need to test dozens of algorithms before they find the most suitable one for their purposes. Until “controlled failure” is no longer seen as a career-limiting move, insurance organizations will shy away from testing new approaches.
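To give a feel for that experimentation, the sketch below compares a few candidate algorithms on the same synthetic data using cross-validation and reports their average scores. Real projects would evaluate many more candidates, on governed production data, before settling on one.

```python
# Sketch of algorithm experimentation: try several candidate models on the
# same (synthetic) data and keep whichever cross-validates best.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
}

for name, model in candidates.items():
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {score:.3f}")
```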

Insurance organizations also suffer from a cultural challenge common in information-intensive sectors: data hoarding. Indeed, until recently, common wisdom within the business world suggested that those who held the information also held the power. Today, many organizations are starting to realize that it is actually those who share the information who have the most power. As a result, many organizations are now keenly focused on moving toward a “data-driven” culture that rewards information sharing and collaboration and discourages hoarding.

Starting small and growing up

The first thing insurers should realize is that this is not an arms race. The winners will probably not be the organizations with the most data, nor will they likely be the ones that spent the most money on technology. Rather, they will be the ones that took a measured and scientific approach to building their machine learning capabilities and capacities and – over time – found new ways to incorporate machine learning into ever-more aspects of their business.

Insurers may want to embrace the idea of starting small. Our experience and research suggest that – given the cultural and risk challenges facing the insurance sector – insurers will want to start by developing a “proof of concept” model that can safely be tested and adapted in a risk-free environment. Not only will this allow the organization time to improve and test its algorithms, it will also help the designers to better understand exactly what data is required to generate the desired outcome.

More importantly, perhaps, starting with pilots and “proofs of concept” will also provide management and staff with the time they need to get comfortable with the idea of sharing their work with machines. It will take executive-level support and sponsorship as well as a keen focus on key change management requirements.

Take the next steps

Recognizing that machines excel at routine tasks and that algorithms learn over time, insurers will want to focus their early “proof of concept” efforts on those processes or assessments that are widely understood and add low value. The more decisions the machine makes and the more data it analyzes, the more prepared it will be to take on more complex tasks and decisions.

Only once the proof of concept has been thoroughly tested and potential applications are understood should business leaders start to think about developing the business case for industrialization (which, to succeed in the long term, must include appropriate frameworks for the governance, monitoring and management of the system).

While this may – on the surface – seem like just another IT implementation plan, the reality is that machine learning should be championed not by IT but rather by the business itself. It is the business that must decide how and where machines will deliver the most value, and it is the business that owns the data and processes that machines will take over. Ultimately, the business must also be the one that champions machine learning.

All hail, machines!

At KPMG, we have worked with a number of insurers to develop their “proof of concept” machine learning strategies over the past year, and we can say with absolute certainty that the Battle of Machines in the insurance sector has already started. The only other certainty is that those that remain on the sidelines will likely suffer the most as their competitors find new ways to harness machines to drive increasing levels of efficiency and value.

The bottom line is that the machines have arrived. Insurance executives should be welcoming them with open arms.