
Maine Says: Buy Your Own Marijuana

Maine joined the list of states that preclude workers’ compensation coverage for the cost of medical marijuana used to treat a workplace injury.

Bourgoin v. Twin Rivers Paper Co. (Maine Supreme Judicial Court, June 14, 2018). Maine has just joined the list of states that preclude workers’ compensation coverage for the cost of medical marijuana used to treat a workplace injury. Bourgoin sustained a workplace injury that left him with chronic back pain and total disability. He found that marijuana was more effective, and had fewer side effects, than the opioid drugs he had been prescribed. After receiving a certification to use medical marijuana, he obtained an order from the Workers’ Compensation Board directing Twin Rivers, his former employer, to pay for the cost of the marijuana. Twin Rivers appealed, arguing that the federal Controlled Substances Act (“CSA”) prevented an employer from paying for marijuana even where it is legal under state law. The Maine Supreme Judicial Court acknowledged that the CSA expressly disclaims “field preemption” but held that there is an inevitable conflict between a state-law order directing an employer to pay for marijuana and the CSA, which criminalizes marijuana. The court concluded that, if Twin Rivers subsidized the cost of marijuana as a workers’ compensation benefit, it would inevitably be aiding and abetting a federal crime, which is itself criminal conduct. The court emphasized that “the magnitude of the risk of criminal prosecution is immaterial in this case. Prosecuted or not, Twin Rivers would be forced to commit a federal crime if it complied with the directive of the Workers’ Compensation Board.” Two justices dissented. This article was written with Meghan Shiner.

The Case for a Hybrid Business Model

The firm of the future needs to combine the innovative thinking of an insurtech with the experience and knowledge of a traditional insurer.

Change is a funny thing. The people who say they embrace it sound smart, for sure. But saying you embrace something and actually embracing it can be two different things. The fact is, for many insurance agents and companies, the old model works. They benefit from customers’ fears and lack of knowledge. In a perverse way, a fearful or under-informed customer becomes dependent on someone else, and that dependency is satisfied by someone from the traditional insurance industry. So if you’re a traditional insurance agent or company, why change? Name another industry where the courage not to change outpaces the courage to change. There’s probably one out there, but I can’t think of it. Fortunately for the customer, things are starting to change.

Making the Customer King

The traditional insurance model was built in a time when today’s technology didn’t exist. That’s to be expected, but the insurance industry fell further behind by failing to make the first technological leaps that most forward-thinking companies, including other financial services companies, made in response to the digital age that began years ago. I won’t talk about all those leaps, because they’re well-documented. Let’s just skip ahead to the part where the customer expects to be fully in charge. As insurance companies updated their technology, they bolted new systems on top of old ones, failing to meet customer expectations. In the process, companies focused inward instead of focusing on the user interface and customer experience.

See also: Industry 4.0: What It Means for Insurance

Some startups, with awesome technology acumen but little to no insurance background, have appeared over the last few years, accelerating change in the industry.
Shiny new websites are beginning to make it easy for customers to get coverage in seconds, lending insurance the same familiar, seamless online experience customers get when they purchase other products from a desktop, laptop or mobile device. That’s a great step for the industry. Unfortunately, it’s not quite enough.

Something Old and Something New

Building an online experience where the customer gains the knowledge and confidence to adopt a new insurance purchase experience is great, but customers also need confidence that, if something goes wrong, the coverage they bought will really protect them. That takes more than a user-friendly website. It takes real insurance knowledge and experience. Most, if not all, new technology providers that have built a great user interface must pass the customer to a traditional insurance company once it’s time to provide real customer service or pay a claim. That type of handoff interrupts the customer experience and simply won’t work.

See also: The Industry Needs an Intervention

The companies that succeed at bringing real change to the insurance industry will need a blend of sophisticated technology and insurance expertise. Combining the innovative thinking of an insurtech with the experience and knowledge of the traditional insurance industry under one roof is rare today. But it’s this type of hybrid model that will become the new standard for the companies that lead the insurance industry for years to come.

Blockchain's Future in Insurance

In the short term, blockchain can be applied to data exchange and storage, P2P electronic payments and smart contracts.

Blockchain is a revolutionary technology that is likely to have a far-reaching impact on business – on a par with the transformative effect of the internet. Not surprisingly, the huge potential promised by blockchain has prompted a flurry of research activity across different sectors as diverse organizations race to develop applications. In this article, we’ll explore the many benefits that blockchain could bring to the insurance industry and the different challenges that will need to be overcome.

Overview

Blockchain has strong potential in the short and long term in several different areas, particularly where it links with emerging technologies such as the Internet of Things (IoT) and artificial intelligence (AI). But its potential for delivering new applications also depends on the development of blockchain technology itself. In the short to medium term, there are three categories where blockchain can be applied:

  • Data storage and exchange: Data and files of many kinds can be stored using blockchain. The technology provides records that are more secure and more traceable than current storage means.
  • Peer-to-peer electronic payment: Bitcoin (and other blockchain-based cash systems) is an electronic payment system based on cryptographic proof rather than trust. This makes it highly efficient while ensuring transparent and traceable electronic transfer.
  • Smart contracts: Smart contracts are digital protocols whereby various parameters are set up in advance. When pre-set parameters are satisfied, smart contracts can execute various tasks without human intervention, greatly increasing efficiency.
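The execute-when-parameters-are-met behavior described in the last bullet can be sketched in a few lines. This is a toy model, not tied to any actual blockchain platform; the rainfall threshold and payout figures are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class SmartContract:
    """Toy smart contract: parameters are set up in advance, execution is automatic."""
    threshold: float  # pre-agreed trigger, e.g. seasonal rainfall in mm
    payout: float     # indemnity released when the trigger is met

    def evaluate(self, observed: float) -> float:
        # No human intervention: the payout follows mechanically from the data feed.
        return self.payout if observed < self.threshold else 0.0

contract = SmartContract(threshold=100.0, payout=5000.0)
print(contract.evaluate(80.0))   # drought year: trigger met, payout released
print(contract.evaluate(120.0))  # normal year: no payout
```

On a real blockchain, the contract code and the trigger data would both live on-chain, so all parties can verify that settlement followed the pre-set rules.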

Data storage and peer-to-peer electronic transfer are feasible blockchain applications for the short term. At this stage, the technical advantages of blockchain are mainly reflected in data exchange efficiencies, as well as larger-scale data acquisition.

See also: The Opportunities in Blockchain

Smart contracts via blockchain will play a more important role in the medium to long term. By that time, blockchain-based technology will have a far-reaching impact on the business model of insurance companies, industrial management models and institutional regulation. Of course, there will be challenges to overcome, and further technological innovation will be needed as blockchain’s own deficiencies or risks emerge during its evolution. But just like internet technology decades ago, blockchain promises to be a transformative technology.

Scenarios for blockchain applications in insurance

Macro level

Proponents of blockchain technology believe it has the power to break the data acquisition barrier and revolutionize data sharing and data exchange in the industry. Small and medium-sized carriers could use blockchain-based technology to obtain higher-quality and more comprehensive data, giving them access to new opportunities and growth through more accurate pricing and product design in specific niche markets. At the same time, blockchain-based insurance and reinsurance exchange platforms – that could include many parties – would also upgrade industry processes.

For example, Zhong An Technology is currently working closely with reinsurers in Shanghai to try to establish a blockchain reinsurance exchange platform.

Scenario 1 – Mutual insurance

Blockchain operates as a peer-to-peer mechanism, with a DAO (decentralized autonomous organization) acting as a virtual decision-making center; premiums paid by each insured are stored in the DAO. Every insured participant has the right to vote and therefore to decide on final settlement when a claim is triggered. Thanks to its decentralization, blockchain makes the process transparent and highly efficient, with secure premium collection, management and claim payment. In China, Trust Mutual Life has built a platform based on blockchain and biometric identification technology. In August 2017, Trust Mutual Life launched a blockchain-based mutual life insurance product called a “Courtesy Help Account,” where every member can follow the fund. The platform also operates at lower cost than a traditional life insurance company of similar size.

Scenario 2 – Microinsurance (short-term insurance products for specific scenarios)

An example of short-term insurance could be coverage for car sharing or for providers that book and rent accommodation via the internet. Such products are today mainly pre-purchased by the service provider and then resold to end users. Blockchain, however, makes it possible for end users to purchase insurance coverage at any time based on their actual usage and the actual inception and expiry time/date. Records would be much more accurate, avoiding potential disputes.

Scenario 3 – Automatic financial settlement

The technical characteristics of blockchain give it inherent advantages in financial settlement. Combined with smart contracts, blockchain can be applied efficiently and securely throughout the entire process of insurance underwriting, premium collection, indemnity payment and even reinsurance.

Micro level

Blockchain has the potential to change the pattern of product design, pricing and claim services.

Parametric insurance (e.g. agricultural insurance, flight-delay insurance): Parametric insurance requires real-time data interfaces and exchange among different parties. Although it is an efficient form of risk transfer, it still has room for cost improvement. Taking parametric agricultural insurance and flight-delay insurance as examples, a lot of human intervention is still required for claim settlement and payment. With blockchain, the efficiency of data exchange can be significantly improved, and smart contracts can further reduce human intervention in claim settlement, indemnity payment, etc., significantly reducing insurers’ operating costs. Operating efficiency also increases, boosting customer satisfaction.
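To make the flight-delay case concrete, here is a minimal sketch of automatic parametric settlement. The delay trigger, the payout amount and the shape of the policy and flight-status data are all assumptions for illustration, not any real product:

```python
# Hypothetical parametric flight-delay product: indemnity is owed
# automatically once the reported delay meets the pre-agreed trigger.
DELAY_TRIGGER_MIN = 120   # assumed contract parameter, in minutes
PAYOUT = 300.0            # assumed fixed indemnity per policy

def settle(policies: dict, delay_feed: dict) -> dict:
    """policies maps policy_id -> flight number; delay_feed maps
    flight number -> reported delay in minutes (an oracle-style data feed).
    Returns the payouts due, with no adjuster in the loop."""
    return {
        pid: PAYOUT
        for pid, flight in policies.items()
        if delay_feed.get(flight, 0) >= DELAY_TRIGGER_MIN
    }

payouts = settle({"P1": "AB123", "P2": "CD456"},
                 {"AB123": 150, "CD456": 30})
print(payouts)  # only P1's flight meets the delay trigger
```

The point of the sketch is that settlement is a pure function of the data feed: once the feed is trusted, claim handling and payment need no manual adjustment step.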

Some Chinese insurers are already working on blockchain-based agricultural insurance. In March 2018, for example, PICC launched a blockchain-based livestock insurance platform. Currently, the project is limited to cows. Each cow is identified and registered in the blockchain-based platform during its whole life cycle. All necessary information is uploaded and stored in real time in the platform. Claims are triggered and settled automatically via blockchain. The platform also serves as an efficient and reliable food safety tracing system.

Auto insurance, homeowners insurance: Blockchain has wider application scenarios in auto and homeowners insurance when combined with the IoT. There are applications from a single-vehicle perspective as well as for portfolios as a whole. From a standalone-vehicle perspective, the complete history of each vehicle is stored in blocks. This gives insurers access to accurate information on each and every vehicle, including maintenance, accidents, the condition of vehicle parts, ownership history and the owner’s driving habits. Such data facilitates more accurate pricing based on dedicated information for each individual vehicle.

From the insured’s point of view, the combination of blockchain and IoT simplifies the claims process and improves claim settlement efficiency. From a portfolio perspective, blockchain and IoT can drastically lower big-data acquisition barriers, especially for small and medium-sized carriers. This will have a positive impact on pricing accuracy and new product development in auto insurance.

Taking usage-based insurance (UBI) for autos as an example, it’s technically possible to record and share the exact time and route of an insured vehicle, meaning that UBI policies could be priced much more accurately. Of course, insurers will have to consider how to respond in situations where built-in sensors in the insured vehicle break or a connection fails. Furthermore, insurance companies also have to decide whether an umbrella policy is needed on top of the UBI policy, to control their exposure when such situations occur.
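As a rough illustration of the UBI pricing idea, a per-trip premium can be computed directly from the recorded usage. The per-mile rate and the night-driving surcharge below are invented for the example, not taken from any actual UBI product:

```python
def ubi_premium(trip_miles: float, night_miles: float = 0.0,
                base_rate: float = 0.06, night_surcharge: float = 0.04) -> float:
    """Per-trip premium from recorded usage (illustrative rates only).
    Night miles are assumed riskier and carry a surcharge on top of the base rate."""
    if night_miles > trip_miles:
        raise ValueError("night miles cannot exceed total trip miles")
    return round(trip_miles * base_rate + night_miles * night_surcharge, 2)

# A 100-mile trip, 20 of those miles driven at night:
print(ubi_premium(100.0, night_miles=20.0))
```

With accurately recorded times and routes shared on-chain, both the insurer and the insured can recompute this figure and agree on it without dispute.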

Cargo insurance: Real-time information sharing of goods, cargo ships, vehicles, etc. is made possible with blockchain and the IoT. This will not only improve claims service efficiency but also help to reduce moral hazards. In this regard, Maersk, EY Guardtime and XL Catlin recently launched a blockchain-based marine insurance platform cooperation project. Its aim is to facilitate data and information exchange, reduce operating costs among all stakeholders and improve the credibility and transparency of shared information.

International program placement and premium/claims management: Blockchain-based technology allows insurance companies, brokers and corporate risk managers to improve the efficiency of international program settlement and daily management, at the same time reducing data errors from different countries and regions and avoiding currency exchange losses.

Coping with claims fraud: Blockchain is already being applied to verify the validity of claims and the amount of adjustment. In Canada, the Quebec auto insurance regulator (Québec Auto Insurance) has implemented a blockchain-based information exchange platform. Driver information, vehicle registration information, vehicles’ technical inspection results and auto insurance and claims information are all shared through the platform. The platform not only reduces insurance companies’ operating costs but also effectively helps to reduce fraud. All insurance companies with access to the platform receive a real-time notice when a vehicle is reported stolen, and they have full access to every vehicle’s technical information, which promotes more accurate pricing for individual policyholders.

Claims settlement: Using a smart contract, the insured automatically receives indemnity when the conditions in the policy are met; no human intervention is needed to adjust the settlement. In the future, some insurance products will effectively be smart contracts, with coverages, terms and conditions serving as the parameters of the contract. When the parameters are met, policies are triggered automatically and a record is stored in the blockchain. Business models like this will not only build higher trust in the insurance company but also greatly increase operational efficiency and reduce costs; they will also help to reduce moral hazard.

Internal management systems: Internal management systems could be automated through blockchain and smart contracts, helping to improve management efficiency, reduce labor costs and increase the efficiency of compliance audits.

See also: How Insurance and Blockchain Fit

Challenges and problems

Decentralization strengthens information sharing and reduces the monopoly advantages that information asymmetry provides. Under such circumstances, insurance companies have to pay more attention to pricing, product development, claims services and even reputation risk. All this adds up to new challenges for company management. At the same time, every part of the insurance industry must focus more on ensuring the accuracy of original information at the initial stage of its business. Knowing how to respond to false declarations from insureds will be crucial.

From a more macro perspective, “localized blocks” of data will be inevitable in the early phase of development, in line with the pace of technical development and regulatory constraints. In theory, the blockchain itself cannot be tampered with, but data protection will be an issue for localized blocks, so stronger cyber security will be required to protect them. The interaction of blockchain with other technologies could also mean that existing intermediary roles are replaced by new technologies in different sectors. If the insurance industry wants to ensure the continued development of intermediaries, it should address the disruptive risks that blockchain poses to existing distribution business models.

The necessary investment (both tangible and intangible costs) associated with adopting blockchain technology is a big consideration for many companies at this stage. Insurance companies and reinsurance companies operate numerous systems, and the decision to integrate blockchain-based technology/platform shouldn’t be taken lightly. At the current stage of blockchain evolution, this could be one of the biggest obstacles facing insurers.

Overall, blockchain is an inspiring prospect, and there is every reason to believe that this technological breakthrough will bring positive effects to individual insurers everywhere. But at the same time, we need to understand the mutual challenges that lie ahead and work together to promote our industry’s development in what promises to be an exciting new era.



An Wang


An Wang is associate underwriting director, property and engineering facultative, at Gen Re.

How IoT May Revolutionize Claims

The IoT can facilitate and improve claims management, adding even greater value than through its ability to reduce risks.

In a previous article, I addressed the potential for the Internet of Things (IoT) to help P&C insurers reduce non-catastrophic losses in the homeowners insurance sector. Internet-connected products, such as advanced home security systems, water sensors and smoke alarms, are beginning to demonstrate the potential for reduced losses, primarily through early notification of emergencies both to consumers and directly to emergency responders. But the potential insurer benefit from IoT does not stop at loss reduction; in fact, this may be just the beginning. In this article, I will address, at a high level, how IoT can facilitate and improve the claims management process, adding even greater value for those insurers that embrace the technology in their operations.

Overview

Claims management has always been a challenging part of insurance operations. In addition to the time and expense the insurer incurs processing and managing claims, the claims process tends to be particularly unfriendly to customers. Even if unintended, bureaucratic processes and slow response times can give consumers the impression that insurers are intentionally delaying, or worse, looking to avoid paying out. IoT has the potential to automate data collection and communication while providing a “record of truth” for the events leading to a claim, resulting in lower costs for the insurer and a better customer experience. Let’s look at a few specific ways IoT may make a difference in managing your claims process.

First Notice of Loss (FNOL)

Relying on a consumer to initiate a claim introduces risk and uncertainty for the insurer while simultaneously lessening the quality of the overall customer experience. After a loss, especially a major one, the last thing a consumer wants to do is pick up a phone and talk to a claims rep.
Yes, arguably, a courteous and compassionate rep may give a consumer some level of peace of mind. But candidly, customers facing a loss are anxious that their insurer is going to be difficult or uncooperative. I would argue that there are better ways to give the customer peace of mind than requiring them to initiate a communication with a call center. IoT devices can initiate the FNOL process on behalf of the customer and can do so, arguably, in a more compassionate and helpful way. Device alarms can notify the insurer of a potential loss long before the consumer contacts your claims department. The device triggering the notification has a time/date stamp, which is of course associated with a customer address and policy number; basically, all the data needed to begin a claim is already available without the customer being inconvenienced. With this data in hand, insurers might consider making the first point of human contact an outreach message of care and concern. Even if validation or confirmation is required to formally begin the FNOL process, this proactive approach would be a unique experience for the customer, whose original thought is, “Oh no, am I covered?” This proactive outreach may also have potential in dealing with the growing problem of “assignment of benefits” that the homeowners space is starting to see. Engaging a customer at the time of loss better positions insurers to guide the customer through more structured remediation processes from the start. An insurer’s first communication might include service provider recommendations based on the loss data from the IoT devices. In addition to raising the customer service bar, this process may help ensure that customers are aligned with trusted vendors rather than simply hoping the situation is resolved properly.

See also: Insurance and the Internet of Things

Claims support and fraud investigation

Back to point one: at the time of loss, the last thing a customer wants to do is speak to an insurance company.
An insured may be rattled and not paying attention to details that may later become critical to the proper assessment of a claim or, worse, a fraud prevention situation. But IoT devices are computers, and their data is never subjective or emotional; it is a record of truth. Nor does it make a difference whether a customer is home or away during an incident: the IoT device responds the same way, registering whatever data it is set to convey. IoT devices may also be labeled by users with indicators of location and use: “motion sensor in bedroom,” “leak detector under hot water tank,” “smoke alarm in basement.” Naming devices with this level of specificity is normal customer behavior, and one that could be enhanced when IoT kits are purpose-built not only to help avoid loss but also to facilitate the claims process. These devices may also record “normal state,” providing a record of the situation prior to a loss. Claims processing doesn’t need to be limited to a single IoT data point, either. A smart home is likely to contain an ecosystem of sensors, many of which have data that can be leveraged in a claim review. For instance, if a connected smoke alarm trips and the insurer has access to other sensors in the house (e.g. a thermostat), a second device’s data could confirm the incident with stronger certainty. These are never-before-available data points that can both help expedite claims processing and potentially lower fraud risk.

Remediation monitoring and confirmation

While the role of the claims adjuster isn’t going anywhere for some time, IoT devices have the potential to provide an early indication of the severity of a claim: which sensors were triggered, their location in the home, the response time before a user disarms or acknowledges them, etc. For example, a whole-home water sensor may indicate that 200 gallons were dispersed while five water sensors in the home triggered, indicating a potential moderate to large loss.
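The early-severity idea in the water-sensor example can be sketched as a simple triage rule. The scoring weights and cut-offs below are invented for illustration; a real insurer would calibrate them against its own loss history:

```python
def triage_severity(gallons_dispersed: float, sensors_triggered: int) -> str:
    """First-pass claim severity estimate from IoT telemetry.
    Weights and thresholds are illustrative, not calibrated."""
    # More water and more independent sensors both push the score up.
    score = gallons_dispersed / 50 + sensors_triggered * 2
    if score >= 12:
        return "large"
    if score >= 6:
        return "moderate"
    return "minor"

# The whole-home water example: 200 gallons dispersed, 5 sensors triggered.
print(triage_severity(200, 5))
```

Even a crude rule like this, run at the moment of the alarm, lets the insurer prioritize adjuster dispatch and remediation urgency before anyone has spoken to the customer.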
And while confirmation of the damage scope may still need to come from the customer or adjuster, there is an early indication of the potential loss, and of the urgency of remediation. A bit more futuristic, but IoT devices may also play a role in monitoring the remediation process. Devices may confirm the work process or the completion of work. Humidity or air quality sensors may confirm that a premises is “back to normal state.” Video imagery from IoT devices may record before, during and after states to confirm work done to a standard. Sensors may even reset or recalibrate when all parties have finally agreed “work complete,” a step that might itself be triggered by the insurer’s claims system of record. Once the sensor is seen as an enabling tool for remediation, I believe the customer will see it as an incredibly supportive piece of technology provided by their partner, their insurer.

Considerations

Of course, with all these technologies, there are important considerations for both the insurer and the customer. Customer awareness and privacy: Concerns around “big brother” and privacy are very real, but the risk can be well managed over the long run. Consumers must feel the risk/reward ratio is balanced, and the best way to achieve this is through transparency. Let your customers know that your objective is to create a better product; better yet, show them, by sharing the benefit in return for access to their sensor data. Engage customers and ask for their preferences when it comes to things like communications or emergency outreach. Consumers are very savvy, and tomorrow’s technology requires a level of transparency yet to be seen in the market. Encourage your customers to be part of the journey. False positives: Computers don’t exaggerate or lie, but sensors can be accidentally triggered.
Early on, there will be misses and false positives, and some will be costly, but that is typical of all early technology cycles. The overall benefit to all insureds will be seen quickly if the process is allowed to mature. As consumers begin to realize that the value of the smart home goes beyond lights and doorbells into true home safety (and insurance benefits), engagement will grow, and the best course of action with the available data sets will become clear to everyone involved. If you need an example, just look at telematics and auto insurance.

See also: Smart Home = Smart Insurer!

Costs: IoT device volumes are still relatively low, and functionality is still limited. As the insurance industry commits to a vision, deployments will become more widespread, driving down prices and raising the quality bar. This is consistent with any early technology deployment. If insurers engage, companies will double down on efforts to meet the industry’s specific needs. And, no doubt, new business models will emerge to support the economics.

Conclusion

The IoT market is maturing quickly, and one of the biggest beneficiaries may be the insurance industry. Looking at the variety of ways IoT can benefit your organization (loss prevention, claims management, customer engagement, etc.) is key to building an effective business case at this early stage. Getting involved with the IoT ecosystem now will both better prepare your organization for the market evolution and help ensure that your needs are well represented as the technology matures. With the above benefits in mind, I hope you agree that claims management may prove to be one of the pillars of the long-term IoT business case.

David Wechsler


David Wechsler has spent the majority of his career in emerging tech. He recently joined Comcast Xfinity, focused on helping drive the adoption of Internet of Things (IoT), in particular with insurance, energy and smart home/home automation.

How to Predict Atlantic Hurricanes

2018 should see smaller economic loss from landfalling hurricanes, but there is a potential game-changer for longer time horizons.

The 2017 Atlantic hurricane season was remarkable, including five landfalls of Category 5 storms in the Caribbean Basin and three Category 4 strikes on the U.S. coastline. The 2017 landfalls cost hundreds of lives and caused record-breaking economic losses, exceeding $250 billion. These losses are sobering reminders of hurricane vulnerability and of the importance of hurricane prediction for public safety and the management of insurance and other economic risks. Hurricane forecasts have continued to improve in recent years, but they are not yet as good as they could be. Continued advances in the weather and climate computer models used to make forecasts, along with improved observations from satellites and aircraft, are driving these improvements. Also essential to progress are advances in the understanding of weather and climate dynamics.

Short-term track and wind intensity forecasts

The National Hurricane Center (NHC) provides five-day forecasts of hurricane tracks and wind intensity that guide emergency management. Technological improvements from higher-resolution weather forecast models and improved satellite observations are helping improve hurricane forecasts. Figure 1 shows the improvement over several decades in the NHC’s forecasted location of storms, referred to as “track error.” An average track error at 48 hours of about 50 nautical miles is impressive in a meteorological context. However, a track uncertainty of just 50 nautical miles for Hurricane Irma’s predicted Florida landfall in 2017 meant the difference between a costly Miami landfall and a relatively benign Everglades landfall. As seen in the figure, we are approaching a track forecast accuracy limit at one to two days, arising from the inherent unpredictability of weather. Over the past decade, the greatest improvements have been in the three- to five-day track forecasts.
A recent analysis conducted by Climate Forecast Applications Network (CFAN) compared track errors from the different global and regional weather forecast models, all of which are considered by the NHC in preparing its forecast. The European model, operated by the European Centre for Medium-Range Weather Forecasts (ECMWF) and supported by 22 European countries, consistently outperformed the U.S. models maintained by the National Oceanic and Atmospheric Administration (NOAA) for track forecasts beyond two days. At five days’ lead time, the ECMWF model had an average track error for the 2017 season of 120 nautical miles, compared with 148 nm for the official NHC forecasts. Innovators in the private sector apply proprietary algorithms to improve on the NOAA and ECMWF model forecasts. At CFAN, ECMWF forecasts are corrected for model biases based on historical track errors. For 2017, CFAN’s bias-corrected storm tracks resulted in a five-day average track error of 114 nautical miles, 26% lower than the average track error of the official NHC forecast. Forecasts beyond five days (120 hours) are becoming increasingly important to the insurance community, especially with the development of insurance-linked securities and catastrophe bonds. The superior global weather forecasts provided by the European model produced Atlantic hurricane tracks for 2017 with an average track error of 200 nautical miles out to eight days in advance. The proprietary track calibrations and synthetic tracks produced by CFAN from the European model maintain an average track error of 200 nautical miles even beyond 10 days for the longest-lived storms. Forecasting of storm wind intensity (as measured by maximum sustained wind speed) is also of key importance. The NHC’s intensity forecasts are slowly improving; its intensity forecast errors at time horizons of two to five days have averaged 10 to 20 knots (10 knots = 11.5 mph) over the past several years.
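The bias-correction idea described above can be illustrated with a drastically simplified sketch: subtract the mean historical position error at each lead time from the raw forecast track. Actual proprietary calibration schemes are far more sophisticated; the data shapes here are assumptions for the example:

```python
import statistics

def bias_correct(forecast_positions, historical_errors):
    """Subtract the mean historical (lat, lon) error at each lead time.

    forecast_positions: list of (lat, lon) raw model positions, one per lead time.
    historical_errors: list (one entry per lead time) of lists of past
    (lat_error, lon_error) pairs for this model at that lead time.
    """
    corrected = []
    for (lat, lon), errs in zip(forecast_positions, historical_errors):
        mean_dlat = statistics.mean(e[0] for e in errs)
        mean_dlon = statistics.mean(e[1] for e in errs)
        # Remove the systematic component of the error; random error remains.
        corrected.append((lat - mean_dlat, lon - mean_dlon))
    return corrected

# Raw 48-hour forecast position, with two past 48-hour errors on record:
print(bias_correct([(25.0, -80.0)], [[(0.5, -0.5), (0.3, -0.3)]]))
```

The key limitation, which real calibration schemes address, is that only the systematic (repeatable) part of the model error can be removed this way; the random part sets the accuracy floor discussed above.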
The greatest challenge in short-term hurricane forecasting remains the prediction of rapid intensification, as occurred with Hurricane Harvey in August 2017, immediately before landfall. The NHC has invested considerable resources in the development of high-resolution regional models to improve prediction of hurricane intensity. Despite considerable research underway on this topic, the prediction of rapid intensification remains elusive.

Seasonal and longer-term forecasts

Advances have been made in forecasting the probability of track locations on weekly timescales out to a month in advance. Monthly forecasts based on global weather forecast models are provided by several private-sector weather forecast providers. Beyond the timescale of about a month, however, global models show little skill in predicting hurricanes. Hence, most seasonal forecasting efforts, particularly beyond timescales of six months, focus on data-driven statistical methods that examine longer-term trends in the global atmosphere. Sea surface temperatures in the Atlantic and the tropical Pacific are key predictors for seasonal forecasts of Atlantic hurricane activity. El Niño (warmer tropical Pacific sea surface temperatures) and La Niña (cooler tropical Pacific sea surface temperatures) patterns have a strong influence, with La Niña being associated with higher levels of Atlantic hurricane activity. Atmospheric circulation patterns also have some long-term “memory” that is useful for seasonal forecasts. CFAN’s research has identified additional predictors of seasonal Atlantic hurricane activity through examination of global and regional interactions among circulations in the ocean and in the lower and upper atmosphere. The predictors are identified through data mining, interpreted in the context of climate dynamics analysis, and then subjected to statistical tests in historical forecasts. 
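The statistical-testing step – checking that a candidate predictor would actually have beaten a naive baseline in past seasons – can be sketched with a toy leave-one-out hindcast. The predictor, data and one-variable linear model below are synthetic stand-ins; CFAN’s actual predictors and methods are proprietary:

```python
import numpy as np

def loo_hindcast_skill(predictor, activity):
    """Leave-one-out hindcast for a one-predictor linear model of
    seasonal activity. Returns the mean absolute error of the model
    and of a climatology (historical-mean) forecast, each computed
    only from years held out of the fit."""
    x = np.asarray(predictor, dtype=float)
    y = np.asarray(activity, dtype=float)
    n = len(y)
    model_err, clim_err = [], []
    for i in range(n):
        keep = np.arange(n) != i            # hold out year i
        slope, intercept = np.polyfit(x[keep], y[keep], 1)
        model_err.append(abs(slope * x[i] + intercept - y[i]))
        clim_err.append(abs(y[keep].mean() - y[i]))
    return float(np.mean(model_err)), float(np.mean(clim_err))

# Synthetic example: activity loosely tracks the predictor
rng = np.random.default_rng(0)
x = rng.normal(size=30)
y = 2.0 * x + rng.normal(scale=0.5, size=30)
model_mae, clim_mae = loo_hindcast_skill(x, y)
print(model_mae < clim_mae)   # a skillful predictor beats climatology
```

The same scoring logic generalizes to multiple predictors and to skill metrics other than mean absolute error; the essential point is that skill is always measured on years excluded from the fit.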
See also: Hurricane Harvey’s Lesson for Insurtechs   While forecasts issued around June 1 for the coming season generally have skill that is better than climatology, the late May/early June forecasts for the 2018 Atlantic hurricane season suggest different outcomes. Predictions range from low activity (CFAN and Tropical Storm Risk) to average activity (Colorado State University) to near- or above-normal activity (NOAA Climate Prediction Center). While many of the late May/early June 2017 forecasts predicted an above-normal season, none of the publicly reported forecasts predicted the extreme activity that was observed during the 2017 season. At longer forecast horizons, forecast skill diminishes further. The greatest challenge in making seasonal forecasts in April and earlier is the springtime "predictability barrier" for El Niño/La Niña, whereby random weather events in the tropical Pacific Ocean during spring can determine the longer seasonal evolution. Seasonal forecasts from the latest version of the European model show substantially improved skill in predicting El Niño and La Niña across the spring predictability barrier, which improves the prospects for seasonal hurricane forecasts issued in late March/early April. La Niña generally heralds an active hurricane season, whereas El Niño is generally associated with a weak hurricane season. However, the occurrence of El Niño or La Niña accounts for only about half of the year-to-year variation in Atlantic hurricane activity. In particular, extremely active years such as 2017, 2004 and 2005 were not characterized by much of a signal from La Niña. The greatest challenge for seasonal predictions of hurricane activity is to forecast the possibility of an extremely active season such as was observed during 2017, 2005, 2004 and 1995. CFAN’s seasonal forecast models capture the extremes in 1995 and 2017 but not 2004 and 2005. 
Improved understanding of the causes of the extreme activity during 2004 and 2005 is an active area of research. The most important target of seasonal forecasts is the number of landfalling hurricanes and their likely locations. The number of U.S. landfalling hurricanes in a single year has varied from zero to six since 1920, and it is only moderately correlated with overall seasonal activity. It is notable that the period of overall elevated hurricane activity from 1995 to 2014 overlapped with a historic 2006-2014 drought of major hurricane and Florida landfalls. Several seasonal forecasters provide a prediction of landfalls. These forecasts may specify the number of U.S. and Caribbean landfalls, or the probability of a U.S. landfall (tropical storm, hurricane or major hurricane); few forecast providers attempt to predict the location of the landfalls. New research conducted by CFAN scientists has uncovered strong relationships between U.S. landfall totals and spring atmospheric circulation over the Arctic, which tends to precede summer dynamic conditions in the western North Atlantic and the Gulf of Mexico. Certain insurance contracts with hurricane exposure typically take effect Jan. 1 of each year and are often written for a period of a year or even longer, so there has been a desire for Atlantic hurricane forecasts to be issued in December for the following season. Because of the apparent lack of hurricane predictability on this time scale, Colorado State University provides a qualitative forecast discussion in December, rather than a forecast. CFAN research has identified some sources of hurricane predictability on timescales from 12 to 48 months, and research is underway to translate this predictability into skillful annual and inter-annual predictions of Atlantic hurricane activity. 
Five- to 10-year outlooks

Some atmospheric modelers provide a five-year outlook of annual hurricane activity, focused on landfall frequency. A key element of such outlooks is the state of the Atlantic Multidecadal Oscillation (AMO). The AMO is a pattern of ocean circulation and associated Atlantic sea surface temperatures that alternates, in 25- to 40-year phases, between periods of increased and suppressed hurricane activity. The year 1995 marked the transition to the warm phase of the AMO, which has been an active period for Atlantic hurricanes. In the warm phase of the AMO, sea surface temperatures in the tropical Atlantic are anomalously warm compared with the rest of the global tropics. These conditions produce weaker vertical wind shear and a stronger West African monsoon system, both of which are conducive to increased hurricane activity in the North Atlantic. There has been a great deal of uncertainty about the status of the AMO, complicated by the overall global warming trend. According to the AMO index produced by NOAA, the current positive (warm) AMO phase has not yet ended. In contrast, an alternative AMO definition, the standardized Klotzbach and Gray AMO Index, indicates the AMO has generally been in a negative (cooler) phase since 2014 – and May 2018 had the lowest value since 2015.
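The competing index definitions above differ mainly in their choice of region, detrending and standardization. As a toy illustration of what such an index does – remove the long-term warming trend from a basin-mean sea surface temperature series and smooth what remains to expose multidecadal swings – consider the sketch below. The data and the simple linear detrending are illustrative assumptions, not NOAA’s or Klotzbach and Gray’s actual procedure:

```python
import numpy as np

def amo_style_index(nat_sst, smooth=5):
    """Toy AMO-style index: subtract a linear trend from an annual
    North Atlantic mean-SST series (a crude proxy for removing the
    global warming signal), then apply a running mean so that
    multidecadal variability stands out."""
    t = np.arange(len(nat_sst))
    trend = np.polyval(np.polyfit(t, nat_sst, 1), t)
    anomaly = np.asarray(nat_sst, dtype=float) - trend
    kernel = np.ones(smooth) / smooth
    return np.convolve(anomaly, kernel, mode="same")

# A purely linear warming series has no multidecadal signal,
# so the toy index is (numerically) zero everywhere.
print(amo_style_index(np.linspace(10.0, 12.0, 60)).round(6))
```

Real indices use area-weighted SST over a defined North Atlantic box and more careful detrending (e.g., regressing out global mean SST), which is exactly why the two published indices in the text can disagree about the current phase.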
An intriguing development is underway in the Atlantic in 2018. The figure below shows sea surface temperature anomalies in the Atlantic for May. An arc of cold (blue) temperature anomalies extends from the equatorial Atlantic, up the coast of Africa and then in an east-west band just south of Greenland and Iceland. This pattern is referred to as the Atlantic ARC pattern. See figure 2 below. A time series of sea surface temperature anomalies in the ARC region since 1880, depicted below, shows that temperature changes occur in sharp shifts – in 1902, 1926, 1971 and 1995. On the bottom graph, the ARC temperatures show a precipitous drop over the past few months. Is this just a cool anomaly, similar to 2002? Or does it portend a shift to the cool phase of the AMO? See figure 3 below. Figure 3. Top: ARC temperatures from 1880-2017. The black lines reflect the cold and warm regimes of the Atlantic Multidecadal Oscillation. Bottom: ARC temperatures from 1982 through June 2017. Based on past shifts, a forthcoming shift to the cool phase of the AMO would be expected to have profound impacts:
  • diminished Atlantic hurricane activity
  • increased U.S. rainfall
  • decreased rainfall over India and the Sahel region of Africa
  • shift in north Atlantic fish stocks
  • acceleration of sea level rise on the northeast U.S. coast.
The figures below depict the substantial impact of the AMO on Atlantic hurricanes. The top figure shows the time series of the number of major hurricanes since 1920, with the warm phases of the AMO shaded in yellow. There are substantially higher numbers of major hurricanes during the shaded periods. A similar effect of the AMO is seen in the Accumulated Cyclone Energy (ACE). Seasonal ACE is a measure of the overall activity of a hurricane season that accounts for the number, strength and duration of all of the tropical storms in the season. See figure 4 below. The warm versus cool phase of the AMO has little impact on the frequency of U.S. landfalling hurricanes generally. However, the phase of the AMO has a substantial impact on Florida landfalls. During the previous cold phase, no season had more than one Florida landfall, while during the warm phase there have been multiple years with as many as three landfalls. A major hurricane striking Florida is more than twice as likely during the warm phase as during the cool phase. These variations in Florida landfalls associated with changes in the AMO have had a substantial impact on development in Florida. The spate of hurricanes starting in 1926 killed the economic boom that had begun in 1920; Florida’s population and development then accelerated in the 1970s, aided by a period of low hurricane activity. New developments in decadal-scale prediction combine global climate model simulations with statistical models. Such predictions have shown improved skill relative to climatological and persistence forecasts on the decadal time scale. See also: Tornadoes: Can We Stop the Cycle?

2018 Atlantic hurricane season

The recent La Niña event in the tropical Pacific Ocean is now over; the tropical Pacific is trending toward neutral, with an El Niño watch underway. Sea surface temperatures in the subtropical Atlantic are currently the lowest that have been seen since 1982. 
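The ACE measure described above has a widely used operational definition: sum the squares of each storm’s 6-hourly maximum sustained winds (in knots) while the storm is at or above tropical-storm strength (commonly taken as 35 knots), then scale by 10⁻⁴. A minimal sketch, with made-up wind histories for illustration:

```python
def accumulated_cyclone_energy(six_hourly_winds_kt):
    """ACE for one season: sum of squared 6-hourly maximum sustained
    winds (knots) over all storms, counting only observations at
    tropical-storm strength (>= 35 kt), scaled by 1e-4."""
    total = 0.0
    for storm in six_hourly_winds_kt:      # one list of winds per storm
        total += sum(w ** 2 for w in storm if w >= 35)
    return total * 1e-4

# Hypothetical season: a short-lived tropical storm and a hurricane
season = [
    [30, 35, 40, 35, 30],          # peaks at 40 kt
    [35, 50, 65, 80, 65, 45],      # peaks at 80 kt (hurricane)
]
print(round(accumulated_cyclone_energy(season), 3))   # -> 2.465
```

Because winds enter squared and every 6-hour observation counts, ACE rewards strong, long-lived storms – which is why it tracks the AMO-related activity swings in the figure better than a simple storm count does.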
For the 2018 Atlantic hurricane season, many forecasters who previously predicted a normal or active season are now lowering their forecasts, considering the trend toward El Niño and the cool temperatures observed in the tropical Atlantic. Based on the overall expectations for low Atlantic hurricane activity in 2018, combined with forecasts of the probability of a U.S. landfall ranging from 50% to 100%, we can expect 2018 to be a year with smaller economic losses from landfalling hurricanes relative to the average. Looking at longer time horizons, there is a potential game-changer in play – a possible shift to the cold phase of the Atlantic Multidecadal Oscillation, which would herald multiple decades of suppressed Atlantic hurricane activity and substantially fewer landfalls, particularly in Florida.

How Sharing Economy Is Reshaping Insurance

For users, the insurance protection afforded by a sharing platform is a key consideration, in addition to the earning potential on offer.

The sharing economy is an economic system based on the use of technology to share assets or services between parties (individuals or organizations). Participants use it because it can provide a more flexible and affordable option than some other economic systems; in this way, the sharing economy makes goods and services available to those who would not otherwise be able to access them. Much has been said about how the fast-growing web of consumer-to-consumer transactions that is the largest component of the sharing economy presents new opportunities for the insurance industry. The consensus view among insurers is that this potential market is large, growing quickly and under-developed, yet tricky to insure with traditional products because it blurs the boundaries between personal and commercial lines. In April 2018, Lloyd’s published Sharing risks, sharing rewards: Who should bear the risk in the sharing economy? The report contained the following key findings:
  • Consumers in the sharing economy expect to be protected from the risks of transacting
  • Consumers and sharing platforms have opposing views on who bears responsibility for this protection
  • There is a significant untapped market of potential sharers who would be more willing to participate if protected by insurance
More mature platforms in the sharing economy have established risk management programs and are working in partnership with the insurance industry to develop them further. For the many smaller platforms that make up the vast majority of platforms by number, risk management is at an earlier stage of development. The insurance industry has an important role to play in supporting platforms at all stages of maturity. This study aims to promote dialogue between platforms and insurers and, building on the previous report, has systematically analyzed the sharing economy to understand where insurance can support its growth, while also broadening the geographic scope of research. The study, carried out by Lloyd’s, the world’s specialist insurance and reinsurance market, and Deloitte, scanned the sharing economy for emerging insurance models; conducted a broad review of business and academic literature; surveyed 8,527 consumers across the U.S., China, Germany, France, the U.K. and the UAE; interviewed more than 20 subject matter experts; conducted a platform-only online questionnaire; and held two workshops with representatives from sharing economy platforms, innovation experts and insurance practitioners. See also: The Need for Agile, Collaborative Leaders   The consumer survey data in this report is not an extension of Lloyd’s previous report, as the sample, time period and questions were different. The objective of this report is twofold:
  • To provide sharing economy platforms with an overview of key risks and the insurance solutions available to mitigate them.
  • To help the insurance market further understand how this sector of the economy needs new insurance products and where the most compelling opportunities for product development are located.
In summary, this research found:
  • Sharing is widespread: Approximately 500 million people across the U.S., China, Germany, France, the U.K. and the UAE have shared assets/possessions or services in the past three years to earn a profit; more than 680 million in these markets consumed them in the same period.
  • Currently, a number of platforms have mechanisms to protect users, ranging from transaction-embedded insurance to guarantee schemes. For users, the protection afforded by a platform is a key consideration in addition to the earning potential on offer.
  • Our market scanning indicates that an increasing number of sharing economy platforms provide insurance to their users that is automatically embedded within each transaction, with 57% of adults who have sold services or lent products in the sharing economy in the past three years being insured by transaction-embedded or personally owned cover.
  • Of those selling services and sharing assets, 37% of home sharers took out or upgraded a buildings or contents policy prior to sharing, and 49% of ride sharers took out a new motor policy or upgraded an existing one. Among delivery drivers, the figure is 37%, and 20% of freelancers took out or upgraded liability insurance before providing their services.
  • In addition, our analysis of the consumer survey identified pockets of high demand for insurance among four specific consumer segments. These groups represent product development opportunities for insurers, brokers and other service providers.
  • This study has identified numerous emerging models of sharing economy insurance; some combine elements of well-established commercial and retail covers in a static policy, and others provide more dynamic cover that fluctuates more in line with underlying risks.
  • Partnerships with sharing economy platforms form a key distribution channel. In addition to offering an opportunity to reduce customer friction in the insurance purchase process by embedding it within transactions, distribution via platforms offers greater potential for customer access, risk selection and pricing power than distribution via the open market.
  • Insurtechs are at the forefront of innovating sharing economy products and services and to date have focused on customer-facing links in the value chain.
You can find the full report here. This article was written by Nigel Walsh and Peter Evans.

Nigel Walsh


Nigel Walsh is a partner at Deloitte and host of the InsurTech Insider podcast. He is on a mission to make insurance lovable.

He spends his days:

Supporting startups. Creating communities. Building MGAs. Scouting new startups. Writing papers. Creating partnerships. Understanding the future of insurance. Deploying robots. Co-hosting podcasts. Creating propositions. Connecting people. Supporting projects in London, New York and Dublin. Building a global team.

What Is Really Disrupting Insurance?

A long and detailed research initiative earlier this year found disruption very hard to pull off – but saw that Munich Re was succeeding.

In a recent SMA blog, Karen Furtado, SMA partner, posed this question: “Have you ever found yourself hearing a word so frequently that it begins to lose its meaning?” The word she was referring to was “transformation,” but I think that everyone who reads that question immediately has a specific word of their own that pops into their heads. For me, it is the word “disruption” or any iteration of it – disrupt, disruptor, disruptive, etc. In the insurtech movement, those “disruption” words surface again and again. At SMA, we love the word “transformation.” It’s this year’s umbrella theme of the annual SMA Summit. We spend a significant amount of time with our insurer customers helping them with transformation strategies. We help our technology customers dig deep into their transformation messaging and outcomes. But disruption – well, not so much! SMA believes that it is very difficult to truly disrupt the insurance industry (even though there is a tendency among some people to throw the word about with unwarranted abandon). So, you can imagine how surprising it was that, after a long and detailed research initiative earlier this year, we actually hung the “disruptor” tag on Munich Re! See also: How Analytics Can Disrupt Work Comp   The goal of the research was to analyze annual reports, quarterly analyst statements, magazine articles and public presentations to gain insight into, and maybe some best practices from, the innovation and transformation journeys of some of the largest insurers – Munich Re among them. SMA’s recent research brief, Who Is Really Disrupting the Insurance Industry? And What You Can Learn from Munich Re’s Journey, reviews the findings. There are many lessons to be learned from their journey, but three things in particular resonate:
  • Munich Re did not let traditional reinsurance roles place rails around their innovation strategies and tactics. For example, they worked with a broker (Marsh) to develop a pandemic product. Neither of the participants is a traditional player in the product development process.
  • Munich Re has stayed true to its heritage and traditional competencies of risk knowledge and risk management but approached change through a new lens of innovation and brave technology exuberance. We also saw this with Chubb as it has stayed true to its deep underwriting heritage in its innovation strategies.
  • Business units are focused on specific innovation and emerging technology initiatives. They have not cordoned off these responsibilities within IT or stand-alone innovation organizations. Business is an active force, not simply a recipient of innovation outcomes.
It's a bit surprising and even inspiring that a reinsurer is an industry disruptor. Insurers need to study innovation and transformation activities at all industry levels because the traditional (or hyped) competitors may not be the ones that are changing the industry landscape. See also: Innovation: ‘Where Do We Start?’ To be clear, Munich Re is not the only reinsurer that is on an innovation and transformation path. Others are, as well, most notably Swiss Re. However, early on, Munich Re noted unprecedented external forces emerging and, rather than reverting to traditional and frequently successful strategies, the company boldly placed new lenses on business challenges. And the results are – and continue to be – disruptive. (There, I said that word again!)

Karen Pauli


Karen Pauli is a former principal at SMA. She has comprehensive knowledge about how technology can drive improved results, innovation and transformation. She has worked with insurers and technology providers to reimagine processes and procedures to change business outcomes and support evolving business models.

Industry 4.0: What It Means for Insurance

Industry 4.0 will create enormous opportunities but also new risks, such as cyber, failure of critical infrastructure and uncorrelated effects.

The topic of Industry 4.0 has been discussed at many conferences in recent times. When you talk to participants and colleagues, you quickly realize that everyone associates this buzz phrase with something different. To make matters worse, the term is now used in almost every industry as a synonym for the digitized, automated and connected world, also known as the “smart factory.” This article discusses the term Industry 4.0 and examines its impact on property insurance. Industry 4.0 is a new level of organization and control over the entire value chain of a product – from idea and design, to flexible production of customized products, to delivery to the customer. Customers and business partners are directly involved in the processes. The term covers a range of available automation, data-exchange and manufacturing technologies that increase production flexibility and efficiency/profitability and conceptually advance the value chain in industrial production and manufacturing. The basic principle is the intelligent networking of machines, workpieces and systems, as well as all other business processes along the entire value chain, in which everything is regulated and controlled independently. The ultimate vision of Industry 4.0 is an intelligent factory in which all production and business units, machines and devices communicate with each other – as much as possible without human intervention, but involving both employees and external suppliers. It should not be forgotten that the term is, at bottom, shorthand for digitized production with the ultimate goal of producing more at significantly lower cost.

Design principles

The design principles of Industry 4.0 can be summarized as follows:
    • Networking/interaction: Machines, devices, sensors and people can network with each other and communicate via the Internet of Things, or the Internet of People.
    • Information transparency: Sensor data extend information systems of digital factory models to create a virtual image of the real world.
    • Decentralization: Cyber-physical systems are able to make independent decisions.
    • Real-time decisions: Cyber-physical systems are able to collect and evaluate information and translate it directly into decisions.
    • Service orientation: Products and services (of cyber-physical systems, people or smart factories) are offered via the Internet.
    • Modularity: Smart factories adapt flexibly to changing requirements by exchanging or extending individual modules.
See also: The Unicorn Hiding in Plain Sight

Challenges for Industry 4.0

Although the goals of Industry 4.0 sound promising, a number of challenges remain to be resolved, including:
    • Availability of relevant information in real time through connectivity of all entities involved in the value chain
    • Reliability and stability for critical machine-to-machine (M2M) communication, including very short and stable latency (real time)
    • Progress in network technology toward real-time actions
    • Need to maintain the integrity of production processes
    • Increased vulnerability of the supply chain
    • IT security problems
    • Data, network, cyber and device security, etc.
    • Need to avoid unexpected IT errors that can lead to production downtime
    • Protection of industrial know-how
    • Lack of adequate skills to drive the Industry 4.0 revolution
    • Threat of redundancy problems in the IT department
    • Ethical and social impact on society – what would be the impact if a machine were to override the human decision
Challenges for the insurance industry

The insurance industry will continue to collect data and information for underwriting and to prepare and evaluate it using new algorithms and artificial intelligence. For example, information collected at the operating and machine level could help to identify certain patterns and predict when maintenance or servicing is required or when a machine is nearing the end of its life. This allows a more detailed assessment of the actual exposure, which in turn can have an impact on all business areas of the insurance industry – so that insurance principles might have to be redefined accordingly. In the future, a single claim may affect several lines of business simultaneously, which will often make it difficult to identify the person liable for a loss and to assign the loss to a line of business; this in turn will complicate claims settlements. The probability of business interruption losses – caused by fire or natural catastrophe, for example – will increase because of the virtualized value chain that results from the optimization of systems and their dependency on the environment, suppliers, customers, energy supply, etc. Ultimately, this could significantly extend the recovery period following a loss event – in particular because of the search for causes and the replacement of destroyed machines, plants, networks and communication channels. As a further consequence, the complexity of the linked systems and technologies will also produce exposures not yet known, with serious and unexpected outcomes. For example, a cyber attack or security failure could interrupt production or supply, and cascade effects could ultimately lead to a complete collapse of the entire value chain. For the insurance industry, the outcome of such an event could be comparable to current losses from natural catastrophes or a pandemic event. 
The problem is that industry and insurers generally have little, if any, experience with the real but intangible and difficult-to-quantify risks arising from the networking and automation of business processes.

Options for insurers

The economy is doing everything it can to make Industry 4.0 a reality as quickly as possible. One example is the Mindsphere initiative launched by Siemens, a cloud-based open IoT operating system that can already be used today by the companies involved. It was developed for three purposes:
    • To simulate plant and machine behavior before conversion and modernization
    • To monitor machines set up at customers’ businesses
    • To compare production, quality and maintenance data with other machines, and thus increase efficiency and the ability to identify problems – for example, imminent defects – so that repairs can be carried out early and a prolonged production downtime can be prevented
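The third purpose – spotting imminent defects from machine data so repairs can happen before a prolonged downtime – can be sketched as a simple drift detector. This is purely illustrative and assumes nothing about Mindsphere’s actual interfaces; the class, thresholds and readings below are invented for the example:

```python
from collections import deque

class VibrationMonitor:
    """Minimal predictive-maintenance sketch: flag a machine when the
    average of its recent sensor readings drifts beyond a tolerance
    of a fleet baseline (e.g., derived from comparable machines)."""
    def __init__(self, baseline, tolerance, window=5):
        self.baseline = baseline
        self.tolerance = tolerance
        self.readings = deque(maxlen=window)

    def add_reading(self, value):
        self.readings.append(value)

    def needs_service(self):
        if len(self.readings) < self.readings.maxlen:
            return False                    # not enough data yet
        avg = sum(self.readings) / len(self.readings)
        return abs(avg - self.baseline) > self.tolerance

monitor = VibrationMonitor(baseline=1.0, tolerance=0.3)
for v in [1.2, 1.3, 1.4, 1.5, 1.6]:        # gradual upward drift
    monitor.add_reading(v)
print(monitor.needs_service())              # -> True: schedule service
```

Production systems would use richer models (frequency-domain features, learned anomaly scores), but even this windowed-threshold pattern captures the core idea: compare live telemetry against a baseline and act before failure.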
Currently under discussion is the extent to which the insurance products available today in the property and liability lines of business offer sufficient cover for this concept. As Industry 4.0 is controlled via networks and data streams, protection against cyber attacks will certainly be taken into account increasingly in current coverage concepts. In addition, however, new risks will arise with integrative and automated production, and new insurance solutions will have to be developed to cover them. The use of the new technologies will result in new and different liability scenarios for all market participants. One of the difficulties will be to determine, for example, what caused the damage and who could be held liable – in other words, whether there is insurance cover for a specific loss and, if so, under which insurance policy.

In this respect, the insurance and reinsurance industries need to address the topic of Industry 4.0 at an early stage and support policyholders in the implementation of their Industry 4.0 concepts – recognizing the associated changes in risks and their implications for liability and property insurance cover. To establish the insurance industry as an important know-how carrier and partner for policyholders, a discussion with policyholders must be conducted as a matter of urgency regarding potential risk scenarios and possible protective measures. Furthermore, insurers should support the industry from the outset in the development of necessary protection and prevention measures – such as predictive maintenance, defense against cyber attacks, business continuity plans and measures against the failure of critical infrastructure – to identify and avert potential risks before they manifest, so that possible losses can be avoided (i.e., preventive risk management). 
In addition, the insurance industry should promote the development of its own concepts for the analysis and assessment of new risks, including:
    • Turning away from burning cost toward risk models
    • Developing new loss prevention measures
    • Developing artificial intelligence
    • Introducing more extensive data analyses and forecast models to mitigate losses before they occur
The use of big data/IoT technologies can, for example, help insurers identify new risks and, if necessary, develop appropriate insurance solutions. This will include the development of new insurance products that address both the challenges and exposures and the loss prevention and mitigation measures of policyholders, e.g., model terms and conditions for an Industry 4.0 all-risks policy. Ultimately, the decisive element will be the development of new ways to cope with accumulation scenarios from Industry 4.0 loss events, with the focus on major losses. In addition, the internal and external business processes of insurance companies (keyword: digitization) will be affected, for instance in the areas of communication, transparency, claims handling, preparation of proposals, etc. See also: 3 Major Areas of Opportunity

Conclusion

If Industry 4.0 is implemented as planned, it will lead to a revolution in existing business processes that will also affect the insurance industry, which will need to adapt both its processes and its current insurance products. In accordance with the promoted goals, Industry 4.0 can create enormous added value, especially for industrial companies and not least for our global economy and society. It will be accompanied by the generation of enormous data streams that can be evaluated and used for resource-efficient, high-quality production. Ultimately, it will affect our well-known world of manufacturing and selling products and, finally, our whole lives. However, this concept will also entail new risks, such as cyber, data protection, failure of critical infrastructure and uncorrelated effects. Industry 4.0 will change the insurance industry as a whole, along with our currently well-known and widely used strategies for defining risks, insurance, underwriting exposures and insurance products. This means that the Industry 4.0 concept will also be a revolution for the insurance sector. 
This requires those in the insurance industry to follow developments and inherent changes in the industry as closely as possible and to adapt current insurance products to the new realities. In this respect, one can ultimately speak of today’s insurance industry as moving toward an Insurance Industry 4.0.

First, You Must Define the Problem

As Albert Einstein said, if he had an hour to solve a problem, he would spend 55 minutes defining it and five minutes on solutions.

"Houston, we’ve had a problem here." John Swigert’s famous words were delivered in a voice as calm and clear as the mountain air in his native Denver, but to the Apollo 13 mission controllers hundreds of thousands of miles below in Texas, they fired the starting gun in a race against time. At 2am on April 14, 1970, an explosion in an oxygen tank and the failure of a major part of the electrical system suddenly put the Apollo crew’s lives at risk. The extreme conditions under which they had to work required rapid creative thinking from a large group of people on the ground and aboard the ship itself. As the drama unfolded, the whole world watched, holding its breath. This wasn’t a simple case of pulling a ready-made solution off the shelf; it required exploring the nature and dimensions of the problem, redefining and shaping it. Only then did the solution route become apparent, emerging gradually as a direction worth traveling in.

Now let’s move to a different world and try, for a moment, to climb inside the mind of an artist trying to come up with something novel and of artistic value. This is what researchers Mihaly Csikszentmihalyi and Jacob Getzels did in 1971, working with a group of art students. They gave the students a table holding around 30 objects and observed them as they carried out the task of constructing a still life composition from them. The results were striking. These artists didn’t simply place the objects randomly and start to paint; instead they explored, arranged and rearranged, selected and abandoned. They took time climbing around the challenge they had been given, exploring it carefully before finally embarking on their particular journey through the landscape offered by the resources table.
See also: Don’t Just Indulge in “Innovation Theater”

When the work was evaluated by a group of experienced professors, there was a clear link between the quality of what the students produced and the amount of time and effort they had spent in this exploration stage. Csikszentmihalyi and Getzels called this "discovery orientation." Perhaps even more important, when the researchers followed up with their subjects later in their working lives as artists, they found that successful performance (now judged by people buying their creations or galleries selecting their work for exhibition) was closely predicted by the approach they had shown in this early experiment. Problem exploration is correlated with high-quality creativity.

So what we do in the early stages of working with a problem matters. We may not even be aware that it is a problem – as the sculptor Henry Moore observed: "I sometimes begin drawing with no preconceived problem to solve, with only a desire to use pencil on paper and only make lines, tones and shapes with no conscious aim. But as my mind takes in what is so produced, a point arrives where some idea becomes conscious and crystallizes, and then control and ordering begin to take place."

Sometimes the problem can be like a toothache, nagging away in the background without drawing our full attention. James Dyson wasn’t the first person to be frustrated at the inability of his vacuum cleaner to pick up all the dust, or at the need to keep changing the bag. But eventually something snapped in his engineer’s mind, and he took the recalcitrant machine to his workshop to try to improve it. Crawling around the apparent problem of an inefficient filtering mechanism – the particles of dust blocked the pores of the bag and quickly reduced its effective suction – he suddenly had a flash of insight. The actual problem wasn’t how to make a better bag but whether you needed a bag at all.
What if you could make a cleaner with no bag, using a different way of separating the dust from the air being sucked through?

What Dyson, the Apollo team and countless other innovators recognize is that it’s worth differentiating between simply recognizing that there is a problem to solve (problem identification) and working out how to make the problem workable (problem definition and redefinition). The influential psychologist David Kolb suggested that the process is a little like watching a detective at work, “gathering clues and information about how the crime was committed, organizing those clues into a scenario of ‘who done it’ and using that scenario to gather more information to prove or disprove the original hunch.”

Part of the challenge here is that unconscious processes play a part; problem finding is often linked to moments of insight or intuition. We catch a glimpse of something about the problem that triggers an aha! moment, a new way of looking at it. This idea was first advanced in 1926 by Graham Wallas, who argued that creative problem-solving involves an element of unconscious incubation, often characterized by a sense of "intimation" as we become consciously aware of the insight our brains have found. A team led by Gordon Bower at Stanford University developed a theory around incubation, suggesting that this isn’t simply a flash of inspiration – there’s an underlying process going on that involves two stages:
  • A guiding stage, where coherence or structure is unconsciously recognized and used.
  • An integrative stage, where coherence makes its way to consciousness.
Have you ever gone to bed feeling annoyed and agitated because you’re wrestling with a problem? Finally, you give up on it and turn out the lights – and then you wake the next morning and there’s a solution staring you in the face. It sits there, and you’re not only pleased that it has arrived but also have a clear sense of aha! – you know it’s the right answer. What you experience is the transition between the two stages in Bower’s model. So insight is not luck but the sum of various behaviors going on before that moment.

This fits with current neuroscience – for example, Jakob Hohwy’s idea of the predictive mind, which suggests that when we meet a problem our brain produces candidate solutions using models based on experience. We may have to modify the models to suit the particular circumstances of our current problem – it’s not always a simple matter of plug and play – but essentially we reapply templates that we already have on file. In other words, we begin the problem search by looking for something that we recognize – problem recognition. But if we can’t find a match, we begin a secondary activity: searching for new solutions to the new situation – essentially reframing, rethinking the problem.

What does all of this mean for our approaches to innovation? Clearly, if we’re moving along an established trajectory, doing what we do but better – the Mk2 or Mk3 version of our product, the latest update to our software, the improvement to our established process – then such incremental innovation will draw extensively on what we already have on our mental shelves. But if we’re looking for a radical solution, trying to break out of the box, then we need to look at how we might help with the task of exploration toward a new solution. There are some useful strategies for developing "discovery orientation," for example:
  • Trust your gut – intuition is a powerful clue to emerging directions of interest, and it isn’t always possible to explain why you feel this is a direction worth exploring. Research on product developers looking for new technologies found that experienced engineers recognized the value of this and followed up on their early hunches about interesting directions to follow.
  • Prototype to explore – one of the powerful reasons behind the use of prototypes and minimal viable products is that they offer boundary objects that make very early ideas available for others to play with and explore. Their value is that they are not the final answer, may even be a long way from it, but the process of exploring and interacting can lead to key insights around which the innovation can pivot.
  • Look with new eyes – get fresh insights and questions from people unfamiliar with the problem being identified. By definition, they know nothing about it so they will ask different questions, sometimes naïve but sometimes cutting through to reveal a novel insight.
  • Broadcast search – much of the growing evidence around crowdsourcing of ideas for innovation is that by broadcasting a challenge widely we will pick up increasingly diverse perspectives on a problem. The power of innovation markets like Innocentive.com is less that they function as a replacement for our own thinking than that they allow us to access very different ways of looking at our problem. In their studies of the 250,000 regular solvers working on that platform, Lakhani and Jeppesen found it was this diversity that helped lead to novel solutions.
  • Construct crisis – under extreme conditions, conventional off-the-shelf approaches may not be suitable, and we need to explore radical alternatives. For example, the origins of "lean" thinking – an approach to process innovation that has literally changed the world – emerged out of a crisis in resources. Post-war Japanese manufacturers were forced to rethink the entire production process and come up with a low-waste alternative – they had to explore and reframe what they were doing. Lockheed’s famous Skunkworks managed to develop a jet fighter aircraft within six months from a zero base and without even having a working jet engine to experiment with. A growing body of research suggests that creating crisis conditions can be a spur to novel insights and valuable new directions for problem-solving.
See also: How ‘Not Invented Here’ Limits Innovation

In the end, it comes back to the idea of problem exploration, playing with the possibilities just as the artists did in the early research on discovery orientation. It’s a principle that Albert Einstein understood rather well – and his words remain a useful source of advice for would-be innovators: “If I had an hour to solve a problem, I’d spend 55 minutes thinking about the problem and five minutes thinking about solutions.”

John Bessant


John Bessant holds the chair in innovation and entrepreneurship at the University of Exeter and has visiting appointments at the University of Erlangen-Nuremberg and Queensland University of Technology.

Look out for the backlash(es)


With technology, there's always a backlash. Margaret Atwood, author of the dystopian novel "The Handmaid's Tale," says, "With all technology, there is a good side, a bad side and a stupid side that you weren't expecting." Proponents of a technology can always paint a glowing picture of the good side and somehow get us not even to try to imagine the bad side—then a bad side does, in fact, appear, and the stupid side sneaks up on all of us. Backlash ensues.

Her observation popped to mind because of this article in the New York Times last week on the disillusionment settling in among Uber drivers in New York, suggesting that the sharing economy may not be all it's cracked up to be. I've actually been waiting for this backlash for a while, ever since a friend remarked to me about a year ago that Uber is "financed by drivers who don't understand depreciation." They track the hours they spend, plus the cost of gasoline and maybe oil changes, but don't account for the fact that every mile they drive takes something out of the value of the car. As a result, the drivers are essentially making a gift to Uber of the depreciation. The Times article does find disenchantment about the economics of the gig economy, plus even deeper concerns; Uber et al. sold drivers on the idea that they could work in their spare time and finance their dreams, but those careers as, say, recording artists are nowhere to be found. 
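The depreciation point is easy to make concrete with a little arithmetic. Here is a minimal sketch (all figures, the function name and the per-mile rates are invented for illustration, not Uber's or any official numbers) of how ignoring per-mile depreciation inflates a driver's perceived hourly wage:

```python
# Hypothetical illustration: a driver who tracks only hours and fuel
# overstates earnings, because every mile driven also consumes car value.

def effective_hourly_wage(gross_fares, hours, miles,
                          fuel_cost_per_mile=0.12,
                          depreciation_per_mile=0.20):
    """Return (naive, true) net hourly wages, before and after depreciation."""
    fuel = miles * fuel_cost_per_mile
    depreciation = miles * depreciation_per_mile
    naive = (gross_fares - fuel) / hours                 # what drivers tend to track
    true = (gross_fares - fuel - depreciation) / hours   # charging each mile its wear
    return naive, true

# A week of driving: $500 in fares, 25 hours, 600 miles.
naive, true = effective_hourly_wage(gross_fares=500, hours=25, miles=600)
print(f"naive: ${naive:.2f}/hour")   # $17.12 — looks fine
print(f"true:  ${true:.2f}/hour")    # $12.32 — the gap is the gift to the platform
```

Under these assumed rates, roughly $4.80 an hour of apparent earnings is really the car being consumed.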

The idea of a backlash against tech probably isn't too hard a sell these days. We've all seen Facebook, Twitter and other social media have to scramble to defend themselves against charges that they are facilitating racism, the hacking of American elections and just about any other bad thing you can imagine. Elon Musk—Mr. Ironman himself—has found that not every rocket makes it to Mars and is facing a backlash.

I wish there were some easy lesson I could offer about how to anticipate the backlash against whatever technology you're exploring.

History can sometimes be a guide to understanding technology. In the early days of the commercial internet, some idealists envisioned world peace, because all this open communication would lead to understanding and a global community. (No, I'm serious.) But historians pointed back to the early days of the telegraph in the mid-1800s, when similar hopes rose because you could now exchange messages between world capitals in minutes, not weeks or months, and could remove misunderstandings. Those hopes were, of course, dashed. (The parallels turned out to be so strong that someone wrote a book on the telegraph titled "The Victorian Internet." Telegraph operators actually had the first chat rooms; when lines were idle, they'd gossip with each other up and down the lines.)

eBay showed us that markets that theoretically operate with no friction don't actually get to operate in the theoretical world. They have to deal with the real world, one in which all kinds of friction can occur, especially when parties don't know each other and have to find a way to trust each other. Other examples can surely be found related to privacy issues, to the insights that are (and aren't) to be found in data, and so on.

But the only real defense I know is to be jaded: Look for the bad side of a technology even when the evangelist has all your attention focused on the good side. And be ready to duck fast when the stupid side shows up. It will. 

Have a great week anyway.

Paul Carroll
Editor-in-Chief


Paul Carroll


Paul Carroll is the editor-in-chief of Insurance Thought Leadership.

He is also co-author of A Brief History of a Perfect Future: Inventing the Future We Can Proudly Leave Our Kids by 2050 and Billion Dollar Lessons: What You Can Learn From the Most Inexcusable Business Failures of the Last 25 Years and the author of a best-seller on IBM, published in 1993.

Carroll spent 17 years at the Wall Street Journal as an editor and reporter; he was nominated twice for the Pulitzer Prize. He later was a finalist for a National Magazine Award.