
The Key to ‘Augmented Intelligence’

As the insurance industry undergoes massive digital disruption, insurers face a heightened sense of urgency along with risks and challenges that include the increasingly complex nature of processes and operations, the rapid evolution of technology and an increase in fraud. Concurrently, the data sets insurers collect have practically exploded in volume, speed, format and accuracy, and in the value they can bring to companies that know how to harvest them.

Given the exponential pace of change, insurance leaders need to understand the implications of these trends, especially from a data and AI perspective, and consider carefully how they should respond. Augmented intelligence is changing the paradigm, helping insurance companies evolve processes, cut costs and improve customer experience with faster insights. 

The Age of ‘Augmented Insurance’

To keep pace with these disruptions, insurance organizations are evolving their distribution strategies, exploring new partnerships, altering their products and transforming how they use technology to deliver on their strategy, all based on data and analytics insights. Many insurance companies already use predictive analytics to anticipate future customer behavior (including the risk of cancellation), identify fraud risks, triage claims, anticipate trends and predict prices. But all this has required significant investment in sophisticated tools, technologies, infrastructure and, most importantly, people. Fully automated processes can speed up operational activities, but strategic thinking has required insights that are curated, contextual and trustworthy. Augmented intelligence breaks this dependency on manual intervention for curating deep, advanced and contextual insights.

The principle behind augmented intelligence is to act as a force multiplier for human intelligence, autonomously managing complex data processing and analytical tasks and enabling businesses to make faster and smarter decisions. As a result, it frees data scientists and analysts to focus on blue-sky questions and longer-term data science projects by removing the burden of ad hoc insight and narrative generation.

The AI Imperative for Insurers

Insurers today are compelled by their existing and emerging competitors to deliver new offerings that better meet consumer needs and preferences. Recent advances in artificial intelligence, machine learning and augmented intelligence have vastly changed the analytics landscape by removing long-entrenched barriers and making advanced analytics platforms much more accessible to insurers. These new platforms make it possible for key stakeholders such as underwriters, agents and claims adjudicators to get answers to complex business questions, such as “Why did my claims revenue fall?” or “What will happen if I increase my underwriting margin by x%?”, and to make informed decisions based on the answers.

Whether the goal is to maximize market share, increase profitability, optimize cost–or some combination of these–insurance stakeholders require a multipronged strategy and actionable insights to achieve their objectives. They should be able to:

  1. Analyze key signals and performance trends from various business divisions in real time.
  2. Perform root-cause analysis to arrive at key measures that affect performance and understand why and how performance can be improved. 
  3. Run multiple scenarios by changing key inputs, assess the impact on the targeted key performance indicator (KPI) and select the optimal strategy accordingly.
  4. Design next-best-move cognitive recommendations that take both internal and external factors into consideration.

Augmented intelligence uses machine learning algorithms to automate data and analytics processes, significantly compressing the time-consuming exploration, explanation, prediction and prescription stages and contextualizing the insights to user personas. In practice, this means cutting turnaround times from weeks of work across several decision-support analysts to near real time, with no analyst intervention. Products that truly support advanced augmented analytics deliver comprehensive, deep insights across the value chain at the speed the business needs them; and because these are smart products, they also overcome the chronically low adoption of analytics by giving end users a self-service, enriched and personalized experience.
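
To make the automated “explanation” step concrete, here is a minimal sketch of the kind of root-cause attribution these tools perform: given a claims KPI split by a business dimension, it ranks which segments drove a period-over-period change. The pandas data frame and the column names (region, claim_cost, period) are hypothetical, and this is an illustration of the pattern rather than any vendor’s implementation.

```python
# Minimal sketch of automated "explanation" analytics: attribute a
# period-over-period change in a claims KPI to the segments that drove it.
import pandas as pd

def explain_kpi_change(df: pd.DataFrame, dim: str, kpi: str, period: str) -> pd.DataFrame:
    """Rank `dim` segments by their contribution to the change in total `kpi`."""
    pivot = df.pivot_table(index=dim, columns=period, values=kpi, aggfunc="sum").fillna(0)
    prev, curr = pivot.columns[-2], pivot.columns[-1]  # two most recent periods
    pivot["delta"] = pivot[curr] - pivot[prev]
    pivot["share_of_change"] = pivot["delta"] / pivot["delta"].sum()
    return pivot.sort_values("delta", key=abs, ascending=False)

# Hypothetical claims data: cost by region for two quarters.
claims = pd.DataFrame({
    "period": ["2023Q4"] * 3 + ["2024Q1"] * 3,
    "region": ["North", "South", "West"] * 2,
    "claim_cost": [1.2e6, 0.9e6, 1.1e6, 1.6e6, 0.8e6, 1.1e6],
})
print(explain_kpi_change(claims, dim="region", kpi="claim_cost", period="period"))
# A narrative layer would render the top rows as plain-language insights,
# e.g. "North contributed +$400K (133%) of the quarter-over-quarter increase."
```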

See also: A New Burst for Augmented Reality

Solving for Various Personas

1. Maximize Productivity
2. Reduce Costs
3. Optimize Business Processes

Checklist for Augmented Intelligence Implementation

When implementing an augmented intelligence initiative, insurers must think in terms of the full scope and implications for the organization. A few caveats to consider before going full steam ahead on an augmented intelligence strategy:

1. Identify the relevant use cases to experiment with — Augmented intelligence tools should increase the breadth of analytics capabilities available to end users, which means use cases should be prioritized with this goal in mind. Additionally, rather than conducting use case discovery workshops with IT and business intelligence stakeholders alone, involve functional business leaders from the very outset to capture their specific business needs. This will result in a smoother implementation as well as higher adoption rates across functional roles.

2. Take stock of your use case data and infrastructure — While data is the common denominator for any successful artificial intelligence program, you also need to ensure your data contains the relevant measures and drivers to run advanced analytics models. For example, if your claims data does not capture drivers and causal factors, the augmented analytics tool will not be able to explain the phenomena driving the changes (a minimal readiness check is sketched after this list). Additionally, augmented analytics projects require infrastructure that can support large data sets and run millions of queries and advanced machine learning models in seconds. Whether on premises or in the cloud, always consider the data and infrastructure requirements, and ensure they are in line with the identified use cases so as not to compromise the solution’s efficiency or speed of delivering insights.

3. Orchestrate with existing BI applications — As the name suggests, augmented intelligence “augments” the potential of your existing analytics and insights assets. Don’t treat it as a replacement for your existing dashboards or BI tools. Choose a solution that blends seamlessly with your existing architecture and doesn’t require heavy architectural modifications.

4. Select the right augmented intelligence partner — Your success with augmented intelligence depends on whom you entrust to take it to the finish line. A partner must have the capabilities to support your varied requirements and to devise ways around the common adoption hurdles associated with analytics tools. Moreover, if the vendor doesn’t have a road map for developing the product further, or a support team of domain experts that can help you design new use cases, chances are your experiment will meet a premature death.
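
On the data-readiness point in item 2, a simple automated check can flag gaps before a use case is green-lit. The sketch below is purely illustrative: the required driver columns, null-rate tolerance and history threshold are invented for the example, not prescriptive values.

```python
# Illustrative data-readiness check for an augmented analytics use case.
# Required drivers and thresholds below are hypothetical examples.
import pandas as pd

REQUIRED_DRIVERS = ["claim_type", "adjuster_id", "region", "report_lag_days"]
MAX_NULL_RATE = 0.05      # tolerate at most 5% missing values per driver
MIN_MONTHS_HISTORY = 24   # advanced models need enough history to learn trends

def readiness_report(df: pd.DataFrame, date_col: str = "loss_date") -> dict:
    """Flag missing drivers, overly sparse columns and thin history."""
    missing = [c for c in REQUIRED_DRIVERS if c not in df.columns]
    sparse = {c: round(float(df[c].isna().mean()), 3)
              for c in REQUIRED_DRIVERS
              if c in df.columns and df[c].isna().mean() > MAX_NULL_RATE}
    # date_col is assumed to be a parsed datetime column
    months = df[date_col].dt.to_period("M").nunique() if date_col in df.columns else 0
    return {
        "missing_drivers": missing,
        "too_sparse": sparse,
        "months_of_history": months,
        "ready": not missing and not sparse and months >= MIN_MONTHS_HISTORY,
    }
```

Running such a report for each candidate use case makes the “take stock” step repeatable rather than a one-time judgment call.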

See also: Untapped Potential of Artificial Intelligence

Conclusion 

The ability to rapidly respond to an uncertain environment is expected to become a new core competency. Augmented analytics should be viewed as an always-on, immersive system that guides key stakeholders and provides visibility for lines of business, teams and locations. Insurers need to graduate employees from tedious manual processes, focusing their efforts on decision-making that adds business value instead. Insurers need to think about how augmented intelligence can become a key enabler of strategic choices, and not a barrier to success.

Striking the Perfect Balance on AI

Artificial intelligence (AI) is hot. Funding for AI startups has steadily increased over the past decade as a host of innovative tech entrepreneurs have stepped forward to solve a broad range of business problems. In many respects, AI is a natural fit for the insurance industry. This business is a numbers game; it’s about understanding correlation, predicting risk and identifying trends and anomalies, so the current rush to adopt AI doesn’t come as a surprise.

AI is indeed powerful — but it’s not magic. For years, tech visionaries and Hollywood screenwriters have offered us visions of a future in which machines exhibit human-like thought patterns combined with a virtually unlimited capacity to consume and digest new information. Digital assistants like Siri and Alexa have furthered those impressions, using natural language processing to understand human speech and respond to simple requests. They have reinforced the notion that artificial general intelligence (AGI) is already upon us.

Back in 1995, Gartner published and branded its now-famous “Hype Cycle.” A new category of technology emerges, a flood of VC money follows and, eventually, most early-stage companies fail or are acquired by bigger players. Ultimately, the majority of hyped technologies succeed in providing net positive value but not before they’ve been through the stage that Gartner calls “the trough of disillusionment.” That stage can be painful; it’s where a lot of startups fail. It’s also where a lot of once-promising internal projects are relegated to the trash heap.

There are, however, some extraordinarily good reasons for the current hype around AI. Insurers are achieving remarkable efficiencies with it, using AI to transform claims management, improve risk assessment and detect fraud. The key, generally speaking, is to use AI to augment and enrich existing business processes rather than replacing them wholesale. The human touch still matters, but it can be rendered far more effective with the assistance of natural language processing, machine learning and predictive analytics.

To be successful with AI, executives should adopt the view that this technology serves as a key component within their overarching business strategy. As such, it requires careful analysis and forethought. With the right approach, insurers can achieve impressive returns on AI investments. That’s not a future promise; it’s already happening today.

Successful deployment of AI requires a careful balance between visionary optimism and cautious pragmatism. As insurance executives plan their AI investments, here are some best practices that will help to ensure successful business outcomes:

  • Start with a list of your biggest challenges, then identify a path to solving them, one step at a time. Think beyond innovation and experimentation. Engage with stakeholders throughout your organization with the aim of identifying specific problems and operationalizing AI to achieve tangible benefits in the near term.
  • Assess the opportunity for solving those challenges by reengineering (not necessarily replacing) existing business processes to incorporate AI. Map various AI technologies against those challenges. Aim at augmenting human intelligence — not replacing it.
  • Assess your organization’s capacity to adopt and integrate AI, including fundamental competencies in AI technology, data access and data quality, change management and willingness among the target users to adopt proposed solutions.
  • Get senior management involved … but at the right time. AI is strategic, and the C-suite needs to be informed and involved. They can bring vision, energy and support to your AI initiatives, provided that you engage them at the appropriate stage in the process. If senior management is involved too early, they could specify new initiatives that are ill-suited to AI. Early C-suite involvement can also frequently lead to inflated expectations. A better approach is to bring well-considered proposals that multidisciplinary teams at ground level have fleshed out.
  • Make this a cross-functional exercise. Develop AI project teams that incorporate a range of perspectives and skill sets and don’t limit them to a single team. By creating multiple groups, each with its own dynamic and operational focus, your organization will benefit from a greater diversity of ideas.
  • Start an AI-related education and skills program now. Even though you may not be sure yet of your specific needs for retraining and reskilling, begin to make education offerings available now that will help your workers adapt to future changes. Such programs will pay dividends down the road, giving your organization a head start in the change management process.

See also: 3 Steps to Demystify Artificial Intelligence

As you evaluate technology, plan your pilot rollout and eventually operationalize AI within your company, here are some additional factors that will contribute to your success:

  • Before the pilot starts, set a timetable and criteria for deciding whether to go into production. This timetable will add rigor to the decision-making process and put pilot project advocates on notice that implementation is an important consideration from the very beginning.
  • Adopt technologies that can scale and that can be used by your intended audience. If, for example, a chatbot is ill-suited to serve your customers as a primary channel, don’t adopt it with the vague hope that it will improve substantially soon.
  • Get your data in order. AI relies on high-quality data, and it benefits from a holistic view of information enabled by integration. Assess your organization’s ability to unify and harmonize your data and to ensure its accuracy, consistency and completeness.
  • Make sure AI can interface well with your existing systems. Selecting which initiatives to prioritize needs to account for “the last mile” of implementation. Get your IT teams involved early so they have a hand in creating a feasible solution.

Finally, it’s important to be flexible and transparent and to manage expectations. Some pilots will prove to be impracticable fairly early. That’s to be expected, but stakeholders should understand that AI pilot projects are like a portfolio of investments; some will succeed while others will not. AI isn’t the answer to every problem, but insurers that neglect to get on board will be eclipsed by those that do. Be willing to learn from successes and failures and apply that knowledge to your future endeavors.

As first published in PropertyCasualty360.

‘Explainable AI’ Builds Trust With Customers

Artificial intelligence (AI) holds a lot of promise for the insurance industry, particularly for reducing premium leakage, accelerating claims and making underwriting more accurate. AI can identify patterns and indicators of risk that would otherwise go unnoticed by human eyes. 

Unfortunately, AI has often been a black box: Data goes in, results come out and no one — not even the creators of the AI — has any idea how it came to its conclusions. That’s because pure machine learning (ML) analyzes the data iteratively to develop a model, and the reasoning embedded in that model is simply not visible or understandable.

For example, when AlphaGo, an AI developed by Google subsidiary DeepMind, became the first artificial intelligence to beat a top professional Go player, it made moves that were bewildering to the professional players observing the game. Move 37 in game two of the match was particularly strange, though, after the fact, it certainly appeared to be strong — after all, AlphaGo went on to win. But there was no way to ask AlphaGo why it had chosen the move that it did. Professional Go players had to puzzle it out for themselves.

That’s a problem. Without transparency into the processes AI uses to arrive at its conclusions, insurers leave themselves open to accusations of bias. These concerns of bias are not unfounded. If the data itself is biased, then the model created will reflect it. There are many examples; one of the most infamous is an AI recruiting system that Amazon had been developing. The goal was to have the AI screen resumes to identify the best-qualified candidates, but it became clear that the algorithm had taught itself that men were preferable to women, and rejected candidates on the basis of their gender. Instead of eliminating biases in existing recruiting systems, Amazon’s AI had automated them. The project was canceled.

Insurance is a highly regulated industry, and those regulations are clearly moving toward a world in which carriers will not be allowed to make decisions that affect their customers based on black-box AI. The EU has proposed AI regulations that, among other requirements, would mandate that AI used for high-risk applications be “sufficiently transparent to enable users to understand and control how the high-risk AI system produces its output.” What qualifies as high-risk? Anything that could damage fundamental rights guaranteed in the Charter of Fundamental Rights of the European Union, which includes discrimination on the basis of sex, race, ethnicity and other traits. 

Simply put, insurers will need to demonstrate that the AI they use does not include racial, gender or other biases. 

But beyond the legal requirements for AI transparency, there are also strong market forces pushing insurers in that direction. Insurers need explainable AI to build trust with their customers, who are very wary of its use. For instance, after fast-growing, AI-powered insurer Lemonade tweeted that it had collected 1,600 data points on customers and used nonverbal cues in video to help decide on claims, the public backlash was swift. The company issued an apology and explained that it does not use AI to deny claims, but the brand certainly suffered as a result.

Insurers don’t need to abandon the use of AI or even “black-box” AI. There are forms of AI that are transparent and explainable, such as symbolic AI. Unlike pure ML, symbolic AI is rule-based: Explicit rules describe what the technology has to do, and defined variables are used to reach conclusions. When the two are used together, it’s called hybrid AI, which leverages the strengths of each while remaining explainable. ML can be targeted at the pieces of a given problem where explainability isn’t necessary.
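
As a minimal illustration of that division of labor, the hypothetical sketch below lets transparent symbolic rules make the final, auditable triage decision, while an ML model contributes only a severity score — the one narrow input where explainability isn’t required. All fields, thresholds and the model itself are invented for the example.

```python
# Hybrid-AI sketch: auditable symbolic rules decide; ML scores a subtask.
from dataclasses import dataclass

@dataclass
class Claim:
    amount: float
    prior_claims: int
    severity_score: float  # assumed output of an ML text model (0 to 1)

def triage(claim: Claim) -> tuple[str, str]:
    """Return (decision, reason) so every outcome traces back to a rule."""
    if claim.amount > 50_000:
        return "manual_review", "rule: amount exceeds 50,000 threshold"
    if claim.prior_claims >= 3 and claim.severity_score > 0.8:
        return "fraud_referral", "rule: 3+ prior claims AND high ML severity score"
    return "fast_track", "rule: below all escalation thresholds"

decision, reason = triage(Claim(amount=12_000, prior_claims=1, severity_score=0.4))
print(decision, "-", reason)  # fast_track - rule: below all escalation thresholds
```

Because every decision is paired with the rule that produced it, the ML component can be retrained or swapped without ever making the decision itself unexplainable.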

For instance, let’s say an insurer has a large number of medical claims, and it wants AI to understand the body parts involved in the accident. The first step is to make sure that the system is using up-to-date terminology, because there may be terms used in the claims that are not part of the lexicon the AI needs to understand. ML can automate the detection of concepts to create a map of the sequences used. It doesn’t need to be explainable because there’s a reference point, a dictionary, that can determine whether the output is correct. 

See also: The Intersection of IoT and Ecosystems

The system could then capture the data in claims and normalize it. If the right shoulder is injured in an accident, symbolic AI can detect all synonyms, understand the context and come back with a code for the body part involved. It’s transparent because we can see how each code was assigned, alongside a snippet from the original report. There’s a massive efficiency gain, but, ultimately, humans are still making the final decision on the claim.
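
A toy version of that symbolic normalization step might look like the following. The lexicon and codes are invented for illustration; a production system would use a full medical terminology rather than a hand-written dictionary.

```python
# Sketch of symbolic body-part coding: map free-text mentions to a standard
# code while keeping the matched snippet as transparent evidence.
import re

LEXICON = {  # hypothetical codes and synonym lists
    "SHOULDER-R": ["right shoulder", "r shoulder", "right rotator cuff"],
    "KNEE-L": ["left knee", "l knee", "left patella"],
}

def code_body_parts(claim_text: str) -> list[dict]:
    findings = []
    for code, synonyms in LEXICON.items():
        for phrase in synonyms:
            match = re.search(rf"\b{re.escape(phrase)}\b", claim_text, re.IGNORECASE)
            if match:
                findings.append({"code": code, "evidence": match.group(0)})
                break  # one matched snippet per code is enough evidence
    return findings

report = "Claimant injured the right rotator cuff in a fall; left knee unharmed."
print(code_body_parts(report))
# [{'code': 'SHOULDER-R', 'evidence': 'right rotator cuff'},
#  {'code': 'KNEE-L', 'evidence': 'left knee'}]
```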

AI holds a lot of promise for insurers, but no insurer wants to introduce additional risk into the business with a system that produces unexplainable results. Through the appropriate use of hybrid AI, carriers can build trust with their customers and ensure they are compliant with regulations while still enjoying the massive benefits that AI can provide.

When AI Doesn’t Work

Although I’m a big believer in the prospects for artificial intelligence, and we’ve certainly published a lot to that effect here at Insurance Thought Leadership, AI has also carried a ton of hype since it emerged as a serious field of study in the mid-20th century. I mean, weren’t we supposed to be serving our robot overlords starting a decade or two ago?

To keep us from getting carried away, it’s good to look from time to time at the failures of AI to live up to the projections, to see what AI doesn’t do, at least not yet. And the attempts to apply AI to the diagnosis of COVID-19 provide a neatly defined study.

I’ve long believed in learning the lessons from failure, not just from successes. To that end, while Jim Collins had done the work on the patterns of success in Good to Great and Built to Last, I published a book (Billion Dollar Lessons, written with Chunka Mui) a decade-plus ago based on a massive research project into the patterns that appeared in 2,500 major corporate writeoffs and bankruptcies. You can’t just look at the handful of people who, say, won millions of dollars at roulette and declare that betting everything on red is a good strategy; you have to look at the people who lost big at roulette, too, to get the full picture.

In the case of AI, a recent article from the MIT Technology Review found that, to try to help hospitals spot or triage COVID faster, “many hundreds of predictive tools were developed. None of them made a real difference, and some were potentially harmful…. None of them were fit for clinical use [out of 232 algorithms evaluated in one study]. Just two have been singled out as being promising enough for future testing.”

Another study cited in the article “looked at 415 published tools and… concluded that none were fit for clinical use.”

What went wrong? The biggest problem related to the data, which contained hidden problems and biases.

The article said: “Many [AIs] unwittingly used a data set that contained chest scans of children who did not have COVID as their examples of what non-COVID cases looked like. But as a result, the AIs learned to identify kids, not COVID.”

One prominent model used “a data set that contained a mix of scans taken when patients were lying down and standing up. Because patients scanned while lying down were more likely to be seriously ill, the AI learned wrongly to predict serious COVID risk from a person’s position.

“In yet other cases, some AIs were found to be picking up on the text font that certain hospitals used to label the scans. As a result, fonts from hospitals with more serious caseloads became predictors of COVID risk.”

Some tools also ended up being tested on the same data they were trained on, making them appear more accurate than they are.
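
That evaluation mistake is easy to demonstrate. In the synthetic sketch below (using scikit-learn, with labels that are pure noise), scoring the model on its own training data reports near-perfect accuracy, while a held-out split reveals the model has learned nothing.

```python
# Why testing on training data misleads: a model fit to pure noise looks
# near-perfect on the data it memorized and no better than chance otherwise.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))    # stand-in for image-derived features
y = rng.integers(0, 2, size=1000)  # labels are random noise on purpose

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

print("accuracy on training data:", model.score(X_train, y_train))  # ~1.0, misleading
print("accuracy on held-out data:", model.score(X_test, y_test))    # ~0.5, the truth
```

For the COVID tools, the equivalent of the held-out split would be evaluating on scans from hospitals the model never saw during training.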

Other problems included what’s known as “incorporation bias” — diagnoses or labels provided for the data before it was fed to the AI were treated as truth and “incorporated” into the AI’s analysis even though those diagnoses and other labels were subjective.

I’ll add, based on personal observation from 35 years of tracking AI, that it’s tricky to manage, meaning that issues should be expected. The vast majority of senior executives don’t have a technical background in information technology, let alone in AI, so it’s hard for them to evaluate which AI projects will pan out and which should be set aside. Even those proposing the projects can’t know with much precision ahead of time. They can identify areas as promising, but nobody can know that they’ll hit an insight until that insight appears. Add the fact that AI carries an air of magic, which can give it the benefit of the doubt even when good old humans might do a better job.

The article’s main general recommendation happens to be the same prescription that Chunka and I offered at the end of Billion Dollar Lessons to help head off future disasters: generate some pushback.

In our case, dealing with corporate strategy, we recommended finding a “devil’s advocate” who would look for all the reasons a strategy might fail. The person would then present them to the CEO, who otherwise is often fed a diet of affirmation by people trying hard to make the CEO’s brainchild look brilliant. Our research found that 46% of corporate disasters could have been averted because the strategies were obviously flawed.

In the case of AI, experts quoted in the MIT Technology Review article recommend finding people who could look for problems in the data and for other biases. That advice should be extended to considerations of whether a project should be attempted in the first place and whether claims made on behalf of an AI should be tempered.

As I said, I firmly believe that AI will play a major role in transforming the insurance industry. There are already scores of examples of successful implementations. I just think we’ll all be better off if we keep our eyes wide open and anticipate problems — because AI is tricky stuff, and problems are out there. The more pitfalls we can avoid, the greater our likelihood of success.

Cheers,

Paul

Building Telematics Can Mitigate Risk

Commercial general liability insurers have traditionally estimated the risk exposure of similar businesses based on variables like floor area and revenue. Advances in cloud computing and artificial intelligence are combining to offer insurers new, better variables with which to characterize risk.

Insurers generally understand that liability risk correlates to human presence and movement. A hair salon with twice the foot traffic should present twice the slip-and-fall risk. More expensive haircuts may reflect a business customer’s greater ability to pay but probably do not increase slip-and-fall risk. Indeed, risk should correlate linearly with foot traffic unless (1) traffic is so high that conditions become over-crowded and the risk accelerates, or (2) the building falls unoccupied. Measuring foot traffic and occupancy can also confirm that the insured’s description of its business corresponds to its actual business.
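
As a toy illustration of that exposure logic (with entirely invented rates), expected claims can be modeled as scaling linearly with visits, plus an accelerating surcharge once traffic crosses a crowding threshold:

```python
# Toy exposure model: slip-and-fall claims scale linearly with foot traffic,
# accelerating once occupancy exceeds a crowding threshold. Rates are invented.
def expected_annual_claims(daily_visitors: float,
                           base_rate_per_10k_visits: float = 0.8,
                           crowding_threshold: float = 500,
                           crowding_multiplier: float = 1.5) -> float:
    expected = base_rate_per_10k_visits * daily_visitors * 365 / 10_000
    if daily_visitors > crowding_threshold:
        # risk accelerates under over-crowded conditions
        excess = (daily_visitors - crowding_threshold) / crowding_threshold
        expected *= 1 + crowding_multiplier * excess
    return expected

print(expected_annual_claims(100))  # ~2.9 expected claims per year
print(expected_annual_claims(200))  # ~5.8 — twice the traffic, twice the risk
```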

Progressive Insurance introduced new attributes to characterize driving behavior when it pioneered automotive telematics in the late 1990s, an early form of usage-based insurance (UBI). Rather than insure an automobile based simply on the vehicle’s make/model and age and the driver’s sex and age, insurers could introduce newly observable attributes to better model risk: distance, speed, time of day, etc.

Twenty-five years later, a similar revolution is stirring in building insurance. Advances in cloud computing, artificial intelligence, semiconductors and the internet of things (IoT) make it practical and inexpensive to measure foot traffic and occupancy. Rather than depending on the policyholder to estimate human presence, a process unlikely to deliver numbers that can be compared across businesses, insurers can measure it objectively and continuously. The information will also deliver an actuarial basis for risk assessment over time.

Risk engineers are eminently capable of characterizing variables like floor surface, lighting and door placement. However, variables like occupancy that change continuously are effectively impossible to characterize during an annual visit.  

These sensors are not your father’s IoT. Devices that measure temperature, lighting, sound intensity, hailstone size or flood level are all first-generation IoT, requiring negligible processing power, either at the edge or in the cloud. The new generation of IoT requires high-performance, low-power edge computing devices to predict risk, not simply measure what is empirically evident.

Some insurers think of IoT data as the new FICO (consumer credit) scores for businesses. If a hotel’s ballrooms are always below the limit set by the fire marshal, that implies hotel management is willing to play by the rules. If restaurants and bars do not overcrowd their spaces, they are less likely to obstruct exits or understaff operations. Attention to the rules implies lower risk…and that business may be one the insurer will want to retain with lower premiums.

Foot traffic and occupancy data should be of value to the business owner as well as the insurer — if for different reasons. A cafeteria may want to use foot traffic data to plan food preparation to minimize food waste. Office tenants can use occupancy data for space planning: Does the business need more, less or different space in the coming year? A restaurant owner might want to compare receipts to foot traffic and customer dwell time to measure the effectiveness of sales staff. Does a business efficiently use its real estate? How does a company compare with its peers? Are there opportunities to use real estate more efficiently?

It is likely that not all policyholders will welcome a technology that measures occupancy — in the same way that not all drivers have welcomed technologies that measure driving behavior. Conversely, businesses that welcome the sensors are likely to self-select as attentive to overcrowding… and to reflect a lower risk. And once the sensors are in place, reverse moral hazard suggests that insureds will improve their behavior — justifying a discount offered in exchange for accepting the sensors.

Insurers can gain market share by identifying lower-risk properties and offering discounts. Higher-risk properties will see higher premiums and will either need to work with their insurers to reduce risk or will need to find new insurers — probably one that isn’t employing building telematics technology. The outcome of this trend is that overall commercial general liability (CGL) premiums will decline, in part because high-risk properties will be obliged to work to lower their risk profile.

With risk profile information in hand, property insurance may move to the embedded-insurance model, where insurance is provided by the property owner who is equipped to measure occupancy — and risk — in real time. If your staff is at home during a pandemic, premiums drop contractually. If you double the number of staff in a space, premiums rise. More tenants pay a fair price for CGL insurance, and more tenants are suitably insured.
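
A minimal sketch of what such occupancy-linked pricing could look like follows; the rates, baseline and adjustment band are hypothetical, and a real contract would define them actuarially.

```python
# Occupancy-linked premium sketch: scale the monthly CGL premium by measured
# person-hours relative to the baseline the policy was priced on, bounded so
# bills stay predictable. All numbers are illustrative.
def monthly_premium(base_premium: float,
                    measured_person_hours: float,
                    baseline_person_hours: float,
                    floor: float = 0.5, cap: float = 2.0) -> float:
    ratio = measured_person_hours / baseline_person_hours
    return base_premium * min(max(ratio, floor), cap)

# Staff largely at home during a pandemic month: premium falls to the floor.
print(monthly_premium(1_000, measured_person_hours=800, baseline_person_hours=4_000))   # 500.0
# Headcount doubled in the same space: premium rises, capped at 2x.
print(monthly_premium(1_000, measured_person_hours=9_000, baseline_person_hours=4_000)) # 2000.0
```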

Occupancy and foot traffic will not be the last variables to be quietly but accurately measured by IoT sensors. Other measurable attributes include the presence of adults versus children; whether people are running, walking or sitting; and the presence of door mats when it has rained.

As the cost of semiconductors, cloud computing and cellular connectivity continues to decline, sensors will become cheaper to install and manage. At the same time, underwriters and actuaries will be able to accumulate new, invaluable data that more accurately assesses risk and reduces the insurance costs of the 75% of customers who, until now, have been subsidizing the other 25% — now that we finally know who’s who.