Heading Toward a Data Disaster

On July 6, 1988, the Piper Alpha oil platform exploded, killing 167 people. Much of the insurance sat within what became known as the London Market Excess of Loss (LMX) Spiral, a tightly knit and badly managed web of reinsurance policies. Losses cascaded up and around the market, hitting the same insurers again and again. It took 14 years for all claims to be settled, at a cost exceeding $16 billion, more than 10 times the initial estimate.

The late 1980s were a bad time to be in insurance. Piper Alpha added to losses already hitting the market from asbestos, storms in Europe and an earthquake in San Francisco. During this time, more than 34,000 Lloyd's Names, the individuals whose personal wealth backed the market, paid out between £100,000 and £5 million each. Many were ruined.

Never the same again

In the last 30 years, regulation has tightened and analytics have improved significantly. Since 1970, 19 of the 20 largest catastrophes were caused by natural hazards. Only one, the World Trade Center attack in 2001, was man-made. No insurance company failed as a result of any of these events. Earnings may have been depressed and capital may have taken a hit, but reinsurance protections behaved as expected.

But this recent ability to absorb the losses from physically destructive events doesn’t mean that catastrophes will never again be potentially fatal for insurers. New threats are emerging. The modeling tools of the last couple of decades are no longer sufficient.

Lumpy losses

Insurance losses are not evenly distributed across the market. Every year, one or more companies still suffer losses out of all proportion to their market share. They experience a “private catastrophe.” The company may survive, but the leaders of the business frequently experience unexpected and unwanted career changes.

See also: Data Prefill: Now You See It, Now You Don’t  

In the 1980s, companies suffered massive losses because the insurance market failed to appreciate the increasing connectivity of its own exposures and lacked the data and the tools to track this growing risk. Today, all companies have the ability to control their exposures to loss from the physical assets they insure. Managing the impact of losses to intangible assets is much harder.

A new class of modelers

The ability to analyze and manage natural catastrophe risk led to the emergence of a handful of successful natural catastrophe modeling companies over the last 20 years. A similar opportunity now exists for a new class of companies to emerge that can build the models to assess the new “man-made” risks.

Risk exposure is increasingly shifting toward intangible assets. According to CB Insights, only 20% of the value of S&P 500 companies today comes from physical assets; 40 years ago, it was 80%. The remaining value lies in more ephemeral, non-physical assets such as reputation, supply networks, intellectual property and cyber exposure.

Major improvements in safety procedures, risk assessment and awareness of the destructive potential of insurance spirals make a repeat of the type of loss seen after Piper Alpha extremely unlikely. The next major catastrophic losses for the insurance market are unlikely to be physical. They will occur because of a lack of understanding of the full reach, and contagion, of intangible losses.

The most successful new analytic companies of the next two decades will include those that are key to helping insurers measure and manage their own exposures to these new classes of risk.

The big data deception

Vast amounts of data are becoming available to insurers, both free open data and tightly held transactional data. Smart use of this data is expected to radically change how insurers operate and to create opportunities for new entrants into the market. Thousands of companies have emerged in the last few years offering products to help insurers make better decisions about risk selection, price more accurately, serve clients better, settle claims faster and reduce fraud.

But too much data, poorly managed, blurs critical signals. It increases the risk of loss. In less than 20 years, the industry has moved from being blinded by lack of data to being dazzled by the glare of too much.

Data governance processes and compliance officers became widespread in banks after the 2008 credit crunch. Most major insurance companies have risk committees, and all are required to maintain a risk register. Yet ensuring that data management processes are of the highest quality is still not always a board-level priority.

Looking at the new companies attracting attention and funding, very few appear to be offering solutions to help insurers solve this problem. Some, such as CyberCube, offer specific solutions to manage exposure to cyber risk across a portfolio. Others, such as Atticus DQPro, are quietly deploying tools across London and the U.S. to help insurers keep on top of their own evolving risks. Providing excellent data compliance and management solutions may not be as attention-grabbing as artificial intelligence or blockchain, but it offers a higher probability of success in an otherwise crowded innovation space.

Past performance is no guide to the future, but, as Mark Twain noted, even if history doesn't repeat itself, it often rhymes. Piper Alpha wasn't the only nasty surprise of the last 30 years. Many events had a disproportionate impact on one or more companies. The signs of impending disaster may have been blurred, but they were not invisible. Some companies suffered more than others. Jobs were lost. Each event spawned new regulation. But these events also created opportunities to build companies and products to prevent a repeat. Looking for a problem to solve? Read on.

1. Enron Collapse (2001)

Enron, one of the largest and most powerful companies in the world, collapsed once shareholders realized its success had been dramatically (and fraudulently) overstated. Insurers lost $3.5 billion from collapsed securities and insurance claims. Chubb and Swiss Re each reported losses of over $700 million. Jeff Skilling, the CEO, was sentenced to 14 years in prison. One reason for the poor internal controls was that bonuses for the risk management team were influenced by appraisals from the very people they were meant to be policing.

2. Hurricane Katrina and the Floating Casinos (2005)

At $83 billion, Hurricane Katrina remains the largest insured loss ever. No one anticipated the scale of the storm surge, the failure of the levees and the subsequent flooding. Among the many surprises, one of the largest contributors to the loss, from both property damage and business interruption, was the floating casinos, ripped from their moorings and torn apart. Many underwriters had assumed the casinos were land-based, unaware that Mississippi's 1990 law legalizing casinos had required all gambling to take place offshore.

3. Thai Flood Losses (2011)

After heavy rainfall lasting from June to October 2011, seven major industrial zones in Thailand were flooded to depths of up to 3 meters. The resulting insurance loss is the 13th-largest global insured loss ever ($16 billion in today's value). Before 2011, many insurers didn't record exposures in Thailand because the country had never been considered catastrophe-prone. Data on the location and value of global manufacturers' large facilities was neither offered nor requested. The first time insurers realized that so many of their clients had facilities so close together was when the claims started coming in. French reinsurer CCR, set up primarily to reinsure French insurers, was hit with 10% of the total losses. Munich Re, along with Swiss Re, paid claims in excess of $500 million and called the floods a "wake-up call."

See also: The Problems With Blockchain, Big Data  

4. Tianjin Explosion (2015)

With an insured loss of $3.5 billion, the explosions at the port of Tianjin in China are the largest man-made insurance loss in Asia. Property, infrastructure, marine, motor vehicle and injury claims hit many insurers. Zurich alone suffered close to $300 million in losses, well in excess of its market share. The company later admitted that the accumulation had gone undetected because its different information systems did not pick up exposures that crossed multiple lines of business. Martin Senn, the CEO, left shortly afterward.

5. Financial Conduct Authority Fines (2017 and onward)

Insurers now also face the risk of being fined by regulators, and not just over GDPR-related issues. The FCA, the U.K. regulator, levied fines of £230 million in 2017. Liberty Mutual Insurance was fined £5 million (for failures in claims handling by a third party) and broker Bluefin £4 million (for not reporting a conflict of interest). Deutsche Bank received the largest fine, £163 million, for failing to impose adequate anti-money-laundering controls in the U.K., topped up later by a further $425 million fine from the New York Department of Financial Services.

Looking ahead

“We’re more fooled by noise than ever before,” Nassim Nicholas Taleb writes in his book Antifragile.

We will see more data disasters and career-limiting catastrophes in the next 20 years. Figuring out how to keep insurers one step ahead looks like a great opportunity for anyone looking to stand out from the crowd in 2019.

The Robocalypse for Knowledge Jobs

Long-time Costa Rican National Champion Bernal Gonzalez told a very young me in 1994 that the world’s best chess-playing computer wasn’t quite strong enough to be among the top 100 players in the world.

Technology can advance exponentially: just three years later, world champion Garry Kasparov was defeated by IBM’s chess-playing supercomputer Deep Blue. But chess is a game of logic where all potential moves are sharply defined, and a powerful enough computer can simulate many moves ahead.

Things got much more interesting in 2011, when IBM’s Jeopardy-playing computer Watson defeated Ken Jennings, who held the record of 74 consecutive Jeopardy wins, and Brad Rutter, who had won the most money on the show. Winning at Jeopardy required Watson to understand clues in natural spoken language, learn from its own mistakes, and buzz in and answer in natural language faster than the best Jeopardy-playing humans. According to IBM, “more than 100 different techniques are used to analyze natural language, identify sources, find and generate hypotheses, find and score evidence and merge and rank hypotheses.” Now that’s impressive, and much more worrisome for those employed as knowledge workers.
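
To make that description concrete, here is a minimal, purely illustrative sketch of the generate-hypotheses, score-evidence, merge-and-rank pattern IBM describes. Everything in it (the function names, the toy scorers, the weights) is invented for illustration and is not IBM’s code or API:

```python
# Toy sketch of a generate/score/merge/rank answer pipeline, in the spirit
# of IBM's description of Watson. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    answer: str
    scores: list = field(default_factory=list)

def generate_hypotheses(clue, candidate_answers):
    # In Watson, candidates come from many sources; here they are given.
    return [Hypothesis(a) for a in candidate_answers]

def score_evidence(hypotheses, scorers):
    # Watson reportedly applies 100+ independent scoring techniques.
    for h in hypotheses:
        h.scores = [scorer(h.answer) for scorer in scorers]

def merge_and_rank(hypotheses, weights):
    # Weighted combination of scorer outputs, ranked by confidence.
    return sorted(hypotheses,
                  key=lambda h: sum(w * s for w, s in zip(weights, h.scores)),
                  reverse=True)

# Toy usage: two stand-in "evidence" scorers, three candidate answers.
scorers = [lambda a: len(a) / 10.0,
           lambda a: 1.0 if "blue" in a else 0.0]
hyps = generate_hypotheses("IBM's chess computer", ["deep blue", "watson", "hal"])
score_evidence(hyps, scorers)
print([h.answer for h in merge_and_rank(hyps, weights=[0.3, 0.7])])
```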

What do game-playing computers have to do with white-collar knowledge jobs? Well, Big Blue didn’t spend $1 billion developing Watson just to win a million bucks playing Jeopardy. It was a proof of concept and a marketing move. A computer that can understand and respond in natural language can be adapted to do things we currently employ white-collar, educated workers to do, starting with automating call centers and, sooner rather than later, moving up to more complex, higher-level roles, just as we have seen with the automation of blue-collar jobs.

In the four years since its Jeopardy success, Watson has continued advancing and is now being used for legal research and to help hospitals provide better care. And Watson is just getting started. Until very recently, the cost of using this type of technology ran into the millions of dollars, making it unlikely that any but the largest companies could make the business case for replacing knowledge jobs with AIs (artificial intelligences). In late 2013, IBM put Watson “on the cloud,” meaning that you can now rent Watson time without having to buy the very expensive servers.

Watson is cool but requires up-front programming of apps for very specific activities and, while incredibly smart, lacks any sort of emotional intelligence, making it uncomfortable for regular people to deal with. In other words, even if you spent the millions of dollars to automate your call center with Watson, it wouldn’t be able to connect with your customers, because it has no sense of emotions. It would be like having Star Trek’s Data answering your phones.

Then came Amelia…

Amelia is an AI platform that aims to automate business processes that until now have required educated human labor. She differs from Watson in ways that make her much better suited to actually replace you at the office. According to IPsoft, Amelia aims to work alongside humans to “shoulder the burden of tedious, often laborious tasks.”

She doesn’t require expensive up-front programming to learn a task, and she is hosted on the cloud, so there is no need to buy million-dollar servers. To train her, you feed her your entire set of employee training manuals; she reads and digests them in a matter of seconds. Just upload the text files, and she can grasp the implications and apply logic to make connections between the concepts. Once she has that, she can start working through customer emails and phone calls, and can even recognize what she doesn’t know and search the Internet and the company’s intranet for an answer. If she can’t find one, she’ll transfer the customer to a human employee for help, as sketched below. You can even let her listen in on the conversations she doesn’t handle herself, and she learns the job from the existing staff, as a new employee would, except exponentially faster and with perfect memory. She is also fluent in 20 languages.
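
For illustration only, the answer-or-escalate flow described above might look something like the following sketch. All names and logic here are hypothetical stand-ins, not IPsoft’s implementation:

```python
# Hypothetical sketch of an answer/search/escalate flow for a virtual
# agent. None of this is IPsoft's actual code; the names are invented.

def handle_query(query, knowledge_base, search, human_queue):
    """Answer from learned material, fall back to search, else escalate."""
    answer = knowledge_base.get(query)
    if answer is None:
        answer = search(query)        # e.g. intranet/Internet lookup
    if answer is None:
        human_queue.append(query)     # hand off to a human employee
        return "Transferring you to a colleague who can help."
    return answer

# Toy usage: one learned answer, a search stub that finds nothing.
kb = {"change address": "I can update your address right away."}
queue = []
print(handle_query("change address", kb, lambda q: None, queue))
print(handle_query("cancel policy", kb, lambda q: None, queue))
print(queue)  # unanswered queries waiting for (and teaching) human staff
```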

Like Watson, Amelia learns from every interaction and builds a mind map that eventually can handle just about anything your staff handled before. Her most significant advantage is that she pairs an emotional component with her super brains. She draws on research in the field of affective computing, “the study of the interaction between humans and computing systems capable of detecting and responding to the user’s emotional state.” Amelia can read your facial expressions, gestures, speech and even the rhythm of your keystrokes to understand your emotional state, and she can respond in a way that will make you feel better. Her EQ is modeled in a three-dimensional space of pleasure, arousal and dominance, through a modeling framework called PAD. If you’re starting to think this is mind-blowing, you are correct!
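
PAD itself is an established psychological model, but the following mapping from an estimated PAD state to a response tone is a toy sketch invented here, not how Amelia is actually built:

```python
# Illustrative PAD (pleasure, arousal, dominance) emotional state and a
# made-up rule for choosing a response tone. Purely a sketch.
from dataclasses import dataclass

@dataclass
class PADState:
    pleasure: float   # -1 (displeased)  .. +1 (pleased)
    arousal: float    # -1 (calm)        .. +1 (agitated)
    dominance: float  # -1 (submissive)  .. +1 (dominant)

def choose_tone(state: PADState) -> str:
    """Pick a response style from the user's estimated emotional state."""
    if state.pleasure < 0 and state.arousal > 0.5:
        return "apologetic and de-escalating"
    if state.pleasure < 0:
        return "empathetic and reassuring"
    return "friendly and efficient"

angry_caller = PADState(pleasure=-0.7, arousal=0.8, dominance=0.4)
print(choose_tone(angry_caller))  # -> "apologetic and de-escalating"
```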

The magic is in the context. Instead of deciphering individual words of insurance jargon when a policyholder calls in to add a vehicle or change an address, IPsoft explains, Amelia engages with the actual question being asked. For example, Amelia would understand that requests phrased differently can mean the same thing: “My address changed” and “I need to change my address,” or “I want to increase my BI limits” and “I need to increase my bodily injury limits.”
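
As a toy illustration of that idea, the sketch below maps differently phrased utterances to a single intent using simple keyword overlap. Real systems, Amelia included, use far richer language understanding; this only shows the concept, and all names in it are made up:

```python
# Minimal intent matching: different phrasings resolve to one intent.
INTENTS = {
    "change_address": {"address"},
    "increase_bi_limits": {"bi", "bodily", "injury", "limits"},
}

def classify(utterance: str) -> str:
    words = set(utterance.lower().replace(".", "").split())
    # Pick the intent whose keyword set overlaps the utterance most.
    return max(INTENTS, key=lambda intent: len(INTENTS[intent] & words))

for text in ["My address changed",
             "I need to change my address",
             "I want to increase my BI limits",
             "I need to increase my bodily injury limits"]:
    print(text, "->", classify(text))
```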

Amelia was unveiled in late 2014, after a secretive 16-year development process, and is now being tested in the real world at companies such as Shell Oil, Accenture, NTT Group and Baker Hughes on tasks ranging from overseeing a help desk to advising remote workers in the field.

Chetan Dube, the long-time CEO of IPsoft, Amelia’s creator, told Entrepreneur magazine:

“A large part of your brain is shackled by the boredom and drudgery of everyday existence. […] But imagine if technology could come along and take care of all the mundane chores for you, and allow you to indulge in the forms of creative expression that only the human brain can indulge in. What a beautiful world we would be able to create around us.”

His vision sounds noble, but the reality is that most of the employees whose jobs are automated away by Watson, Amelia and their successors won’t be able to move to higher-level, less mundane and less routine tasks. If you think about it, a big percentage of white-collar workers have largely repetitive, service-type jobs. And even those of us in higher-level roles will eventually be automated out of the system; it’s a matter of time, and less time than you think.

I’m not saying that the technology can or should be stopped; that’s simply not realistic. I am saying that, as a society, we need to start having some important conversations about what we want things to look like in 10 to 20 years. If we don’t, we are going to end up in a world with very high unemployment, where the very few people who hold large capital and those with the STEM skills to design and run the AIs will do very well, while the other 80-90% of us could be unemployable. This is truly scary stuff: McKinsey predicts that by 2025 technology will take over tasks currently performed by hundreds of millions of knowledge workers. This is no longer science fiction.

As humans, our brains evolved to think linearly, and we have a hard time understanding and predicting change that happens exponentially. Merely 30 years ago, it was unimaginable that most people would walk around with a device in their pockets capable of more sophisticated computing than the computers at MIT in the 1950s. That huge improvement in power is the result of exponential growth of the kind described by Moore’s law, the prediction that the number of transistors that fit on a chip will double every two years while the chip’s cost stays constant. There is every reason to believe that AI will see similar exponential growth. Just five years ago, the world’s top AI experts at MIT were confident that cars could never drive themselves; Google has since proven them wrong. Things can advance unimaginably fast when growth becomes exponential.
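
The arithmetic behind that kind of growth is easy to check. Under the two-year doubling stated above, 30 years allows 15 doublings:

```python
# Moore's law as stated above: transistor counts double every two years.
years = 30
doublings = years // 2          # 15 doublings in 30 years
print(2 ** doublings)           # 32768x the transistor budget, same cost
```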

Some of the most brilliant minds of our time are sounding alarm bells. Elon Musk said, “I think we should be very careful about AI. If I had to guess, our biggest existential threat is probably that we are summoning the demon.” Stephen Hawking warned, “The development of full artificial intelligence could spell the end of the human race.”