
End of the Road for OBD in UBI Plans?

In theory, there are benefits from reading vehicle data and being connected to the car, but the reality has proven massively different.

I recently attended a telematics event in Brussels and had an interesting discussion about the future of on-board diagnostics (OBD) in auto insurance. I have been in the European telematics usage-based insurance (UBI) space for a long time and have seen all sorts of solutions adopted by insurers when launching programs to consumers: hidden black boxes, windscreen devices, battery-mounted devices and tags, all with varying degrees of success. I have rarely seen OBDs succeed. In theory, there are benefits from reading vehicle data and being connected to the car, but the reality has proven massively different.

First of all, OBDs have proven inconvenient for consumers. Each vehicle has a different position for the port, and unless consumers are carefully guided they simply won’t find it. If they do, the ports can be in inconvenient places, which either makes the device an eyesore in the car or annoying because it can detach when the driver gets into and out of the car. Some less-expensive OBD models, without GPS and GSM, can be paired with phones, but even this experience has never been straightforward due to different Bluetooth standards. So the promise of self-installation really did not work out.

Car manufacturers don’t help the situation. They continuously update their vehicle software, which can cause compatibility problems for OBD makers every time a new model comes to market. Guess who discovers this first? Consumers.

OBDs have also proven inconvenient for insurers. When insurers launch a new UBI program, they want to make sure the data is standardized across all available vehicles. But with all their issues with compatibility and installation, OBDs in Europe have never been able to deliver the standardization that makes the driving data interesting for insurers on a large scale.

See also: Advanced Telematics and AI

OBDs have had some success in countries like the U.S., mainly due to different OBD data standards, bigger cars and more consumer awareness. But even in the U.S., insurers are abandoning OBDs for smartphones, which can provide better customer experiences and adoption rates.

But perhaps most damaging of all, car makers are starting to limit access to the OBD port to protect consumers from hackers and bad experiences. Ultimately, the port was created years ago for diagnostics purposes but has lately been used by hardware providers for very different ones. Organizations interested in accessing vehicle data will probably be directed by OEMs to pull driving data from the cloud through highly secure access systems – not from the vehicle itself. This is why we won’t see many insurers launching new OBD-based UBI programs.

Private Options for Flood Insurance

Seven questions that simplify the complexities of flood insurance in the midst of regulatory changes and extreme weather events.

Congress has once again extended the current mandate for the National Flood Insurance Program (NFIP). If you have used the NFIP in the past to deliver flood insurance to clients, you know all too well that trying to simplify the complexities of flood insurance in the midst of regulatory changes and extreme weather events is an important yet arduous undertaking. The industry is filled with constant change, and people need alternatives. This series of guiding questions can help facilitate conversations with your clients so you can work together to thoughtfully explore all options for protecting their property and valued possessions.

7 Guiding Flood Insurance Questions

Does your client need specialized flood insurance coverage? Consider flood insurance coverage in terms of the specifics of the property and the property owner. Is your client a landlord? Is your client on a fixed income? Is this person holding properties for income-generating purposes? By understanding the needs of your clients, you can more effectively navigate the suite of flood insurance options available today. Private flood insurance enables property owners to supplement the NFIP product today, providing coverage that homeowners expect from their homeowners policies for exposures such as outdoor property, detached structures, swimming pools and basements.

See also: Future of Flood Insurance

Does your client have a finished basement or pool? The NFIP does not cover personal property in basements, so displaced homeowners or homeowners with built-out basements are responsible for these bills themselves. If a storm surge dumps a ton of sand into your client’s pool, is your client prepared to shoulder the costs of the resulting clean-up? By understanding your client’s lifestyle and property usage, you can deliver meaningful solutions. Private options can help.

Does your client’s property value exceed $250,000? The value of custom-built homes continues to increase, with replacement costs rising well above $250,000, the current limit on government-issued coverage. Now, owners of residential homes have options with higher coverage limits at affordable rates through private flood insurance programs.

Would your client need assistance with additional living expenses after flood damage? When weighing coverage options, remember that the NFIP does not cover additional living expenses. With a private flood policy, your client can opt to add additional living expense coverage. This valuable coverage helps homeowners who have been displaced by a flood by covering the costs of shelter and meals.

Are your client’s personal belongings valued at more than $100,000? Consider your client’s property holdings beyond the physical structures she owns. For example, if your client is a landlord or holds income-generating properties, she typically doesn’t need contents coverage. However, some clients may need more coverage than what is available from the NFIP to protect their personal treasures.

Would your clients prefer an easy application process without the hassle of submitting photographs or an elevation certificate? The speed of delivery and streamlined processes of today’s private flood insurance options are increasingly attractive to clients. Plus, property owners can often obtain a quote without an elevation certificate and without providing property photographs.

See also: Time to Mandate Flood Insurance?

Would you like to save your clients money by avoiding federal surcharges or reserve fund assessments?
Private products are not subject to federal surcharges or reserve fund assessments and may be less expensive to purchase than NFIP flood insurance.

While the NFIP reauthorization debate continues, Congress struggles to make flood insurance affordable and to improve claims standards. Discussions continue around the development and delivery of dependable, disciplined, reliable private insurance to help more people protect their financial livelihoods. Presenting private flood insurance options not only helps your clients make more informed decisions, it enhances the value you bring to the relationship as you work together to help them protect what matters most – their families, homes and treasured possessions.

Today, private flood insurance is available in every state, through multiple channels and in multiple locations. Companies have the capacity to step in and offer a suite of comprehensive private options for their clients. Private flood insurance is also embedded in the processes of many brand-name lenders, helping to facilitate loan closings and get Americans into their dream homes without interruption.

John Dickson

John Dickson is president and CEO of Aon Edge. In this role, Dickson oversees the delivery of primary, private flood insurance solutions as an alternative to federally backed flood insurance.

How to Innovate With Microservices (Part 3)

The microservices architecture solves a number of problems with legacy systems and can continue to evolve over time.

In Part 2 of this blog series, we shared how a microservices architecture is applicable to the insurance industry and how it can play a big role in insurance transformation. This is especially true because the insurance industry is moving to a platform economy, with heavy emphasis on the interoperability of capabilities across a diverse ecosystem of partners. In this segment, we will share our views on best practices for adopting a microservices architecture to build new applications and transform existing ones.

Now that we have made a sufficient case for microservice architecture’s ability to bring speed, scale and agility to IT operations, we should contemplate how we can best think about microservices. How can we transform existing monoliths into a microservices architecture? Although the approach for designing microservices may vary by organization, there are best practices and guidelines that can assist teams in the midst of making these decisions.

How many microservices are too many? Going “too micro” is one of the biggest risks for organizations that are still new to microservices architectures. If a “thesis-driven” approach is adopted, there will be a tendency to build many smaller services. “Why not?” you may ask. “After all, once we buy into the approach, shouldn’t we just go ‘all in’?” We encourage insurers to be careful and test the waters. We would caution against starting out with too many smaller services, due to the increased complexity of mixed architectures, the steep curve of upfront design and the significant changes in development processes, as well as a lack of DevOps preparedness. We suggest a “use-case-driven” approach. Focus on urgent problems, where rapid changes are needed by the business to overcome system-inhibiting issues, and break the monolith module into multiple microservices that serve current needs — not necessarily finer microservices based on assumptions about future needs. Remember, if we can break the monolith into microservices, then later we can make the microservices more granular as needed, instead of incurring the complexity of too many microservices without an assurance of future benefits.

What are the constraints (lines of code, language, etc.) for designing better microservices? There are a lot of myths about the number of lines of code, programming languages and permissible frameworks (just to name a few) for designing better microservices. There is an argument that if we do not set fixed constraints on the number of lines of code per microservice, then the service will eventually grow into a monolith. Although it is a valid thought, an arbitrary size limit on lines of code will create too many services and introduce costs and complexity. If microservices are good, will “nanoservices” be even better? Of course not. We must ensure that the costs of building and managing a microservice are less than the benefit it provides — hence, the size of a microservice should be determined by its business benefit instead of lines of code. Another advantage of a microservices architecture is the interoperability between microservices, regardless of underlying programming language and data structure. There is no one framework, programming language or database that is better-suited than another for building microservices. The choice of technology should be made based on the underlying business benefits that a particular technology provides for accomplishing the purpose of the microservice.
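To make the use-case-driven idea concrete, below is a minimal sketch (in Python; the service name, endpoint and toy rating logic are illustrative assumptions, not a prescribed design) of a single urgent capability, quoting, carved out of a monolith as one coarse-grained service that owns its logic and is exposed over plain HTTP:

```python
# Minimal sketch of one coarse-grained, use-case-driven microservice:
# a quoting capability extracted from a monolith, owning its own logic
# and deployable independently. Names and rating logic are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def calculate_premium(vehicle_value: float, driver_age: int) -> float:
    """Toy rating logic that previously would have lived inside the monolith."""
    base_rate = 0.03 if driver_age >= 25 else 0.05
    return round(vehicle_value * base_rate, 2)

class QuoteHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/quotes":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        quote = {
            "premium": calculate_premium(
                float(payload.get("vehicle_value", 0)),
                int(payload.get("driver_age", 0)),
            )
        }
        body = json.dumps(quote).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Versioned, scaled and redeployed independently of the rest of the monolith.
    HTTPServer(("0.0.0.0", 8080), QuoteHandler).serve_forever()
```

Because the service is independently deployable, the monolith (or a digital front end) can simply call it over HTTP while the rest of the module stays where it is until there is a business reason to move it.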
Preparing for this kind of flexible framework will give insurers vital agility moving forward.

See also: Are You Innovating in the Dark?

How do microservices affect development processes? A microservices architecture promotes small, incremental changes that can be deployed to production with confidence. Small changes can be deployed quickly and tested. Using a microservices architecture naturally leads to DevOps. The goal is to have better deployment quality, faster release frequency and improved process visibility. The increased frequency and pace of releases mean you can innovate and improve the product faster. Putting a DevOps pipeline with continuous integration and continuous deployment (CI/CD) into practice requires a great deal of automation. This requires developers to treat infrastructure as code and policy as code, shifting the operational concerns about managing infrastructure needs and compliance from production to development. It is also very important to implement real-time, continuous monitoring, alerting and assessment of the infrastructure and application. This will ensure that the rapid pace of deployment remains reliable and promotes consistent, positive customer experiences. To validate that we are on the right path, it is important to capture some metrics on the project. Some of the key performance indicators (KPIs) we like to look at are listed below (a brief calculation sketch follows the list):
  • MTTR – The mean time to respond as measured from the time a defect was discovered until the correction was deployed in production.
  • Number of deploys to production – These are small, incremental changes being introduced into production through continuous deployment.
  • Deployment success rate – Only 0.001% of AWS deployments cause outages! When done properly, we should see a very high successful deployment ratio.
  • Time to first commit – This is the time it takes for a new person joining the team to release code to production. A shorter time indicates well-designed microservices that do not carry the steep learning curve of a monolith.
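As a rough illustration, here is a minimal sketch (in Python; the field names and sample records are assumptions for illustration, not from the article) of how these KPIs might be computed from a CI/CD audit log:

```python
# Computes three of the KPIs above from hypothetical deployment and incident
# records: number of deploys, deployment success rate and MTTR.
from datetime import datetime
from statistics import mean

deployments = [  # hypothetical CI/CD audit log entries
    {"deployed_at": datetime(2019, 6, 3, 10, 0), "succeeded": True},
    {"deployed_at": datetime(2019, 6, 3, 14, 30), "succeeded": True},
    {"deployed_at": datetime(2019, 6, 4, 9, 15), "succeeded": False},
]
incidents = [  # defect discovered -> correction deployed to production
    {"discovered": datetime(2019, 6, 4, 9, 20),
     "fix_deployed": datetime(2019, 6, 4, 11, 5)},
]

deploys_to_production = len(deployments)
deployment_success_rate = sum(d["succeeded"] for d in deployments) / len(deployments)
mttr_hours = mean(
    (i["fix_deployed"] - i["discovered"]).total_seconds() / 3600 for i in incidents
)  # mean time to respond, in hours

print(f"Deploys: {deploys_to_production}, "
      f"success rate: {deployment_success_rate:.0%}, MTTR: {mttr_hours:.1f}h")
```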
Principles for Identifying Microservices and Examples

More important than the size of the microservice is the internal cohesion it must have, and its independence from other services. For that, we need to inspect the data and processing associated with the services. A microservice must own its domain data and logic. This leads to a domain-driven design pattern. However, it is often possible to have a complex domain model that is better represented as multiple interconnected smaller models. For example, consider an insurance model composed of multiple smaller models, where the party model can be used as a claim party and also as an insured (and various others). In such a multi-model scenario, it is important to first establish a context for each model, called a bounded context, that closely governs the logic associated with the model. Defining a microservice for a bounded context is a good start, because the two are closely related. Along with bounded contexts, aggregates used by the domain model that are loosely coupled and driven by business requirements are also good candidates for microservices, as long as they exhibit the main design tenets of microservices; for example, a service for managing vehicles as an aggregate of a policy object. While most microservices can be identified by following this domain model analysis, there are a number of cases where the business processing itself is stateless and does not result in a modification of the data model, for example, identifying the risk locations within the projected path of a hurricane. Such stateless business processes, which follow the single responsibility principle, are great candidates for microservices. If these principles are applied correctly, loosely coupled and independently deployable services will follow the single responsibility model without causing chattiness across the microservices. They can be versioned to allow client upgrades, provide fallback defaults and be developed by small teams.

Co-Existing With Legacy Architecture

Microservices provide a perfect tool for refactoring a legacy architecture. This can be done by applying the strangler pattern, which gives new life to legacy applications by gradually moving the business functions that will benefit the most into microservices. Applying this pattern requires a façade that can intercept the calls to the legacy application. A modern digital front end, which can offer a better UX and provides connectivity to a variety of back ends by leveraging EIP, can be used as the strangler façade to connect to existing legacy applications (a minimal routing sketch appears after the closing checklist below). Over time, those services can be rebuilt directly on a microservices architecture, eliminating calls to the legacy application. This approach is better suited to large, legacy applications. With smaller systems that are not very complex, the insurer may be better off rewriting the application.

How to Make Organizational Changes to Adopt Microservices-Driven Development

Adopting microservices-driven development requires a change in organizational culture and mindset. The DevOps practice shifts siloed operations responsibilities to the development organization, and with the successful introduction of microservices best practices, it is not uncommon for developers to do both. Even when the two teams exist separately, they have to communicate frequently, increase efficiencies and improve the quality of the services they provide to customers.
The quality assurance, performance testing and security teams also need to be tightly integrated with the DevOps teams by automating their tasks in the continuous delivery process.

See also: Who Is Innovating in Financial Services?

Organizations need to cultivate a culture of shared responsibility, ownership and complete accountability in microservices teams. These teams need to have a complete view of the microservice from a functional, security and deployment infrastructure perspective, regardless of their stated roles. They take full ownership of their services, often beyond the scope of their roles or titles, by thinking about the end customer’s needs and how they can contribute to solving those needs. Embedding operational skills within the delivery teams is important to reduce potential friction between the development and operations teams. It is important to facilitate increased communication and collaboration across all the teams. This could include the use of instant messaging apps, issue management systems and wikis. This also helps other teams, like sales and marketing, allowing the complete enterprise to align effectively toward project goals.

As we have seen in these three blogs, the microservices architecture is an excellent solution for legacy transformation. It solves a number of problems and paves the path to a scalable, resilient system that can continue to evolve over time without becoming obsolete. It allows rapid innovation with a positive customer experience. A successful implementation of the microservices architecture does, however, require:
  • A shift in organization culture, moving infrastructure operations to development teams while increasing compliance and security
  • Creation of a shared view of the system and promoting collaboration
  • Automation, to facilitate continuous integration and deployment
  • Continuous monitoring, alerting and assessment
  • A platform that can allow you to gradually move your existing monolith to microservices and also natively support domain-driven design
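As referenced above, here is a minimal sketch of the strangler façade (in Python; the route prefixes and internal service URLs are hypothetical) that intercepts calls and sends already-migrated capabilities to new microservices while everything else still reaches the legacy application:

```python
# Minimal strangler-pattern sketch: a thin façade routes requests either to
# new microservices (for capabilities already migrated) or to the legacy
# application. Routes move over one at a time as functions are strangled out.
from urllib.request import urlopen

LEGACY_BASE = "http://legacy-policy-admin.internal"        # existing monolith
MIGRATED = {                                                # already strangled out
    "/quotes": "http://quote-service.internal:8080",
    "/claims/parties": "http://claim-party-service.internal:8080",
}

def route(path: str) -> str:
    """Return the backend URL that should serve this request path."""
    for prefix, new_service in MIGRATED.items():
        if path.startswith(prefix):
            return new_service + path
    return LEGACY_BASE + path

def handle(path: str) -> bytes:
    # A real façade would also forward the HTTP method, headers and body.
    with urlopen(route(path)) as response:
        return response.read()
```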
This article was written by Sachin Dhamane and Manish Shah.

Denise Garth

Denise Garth is senior vice president, strategic marketing, responsible for leading marketing, industry relations and innovation in support of Majesco's client-centric strategy.

Blockchain: Bad Tech, Worse Vision

Even the most prominent blockchain company, Ripple, doesn’t use blockchain in its product. You read that right.

Blockchain is not only lousy technology but a bad vision for the future. Its failure to achieve adoption to date is because systems built on trust, norms and institutions inherently function better than the type of no-need-for-trusted-parties systems blockchain envisions. That’s permanent: No matter how much blockchain improves, it is still headed in the wrong direction.

Last December, I wrote a widely circulated article on the inapplicability of blockchain to any actual problem. People objected mostly not to the technology argument but, rather, hoped that decentralization could produce integrity.

Let’s start with this: Venmo is a free service to transfer dollars, and bitcoin transfers are not free. Yet, after I wrote that article saying bitcoin had no use, someone responded that Venmo and PayPal are raking in consumers’ money and that people should switch to bitcoin. What a surreal contrast between blockchain’s non-usefulness/non-adoption and the conviction of its believers! It’s entirely evident that this person didn’t become a bitcoin enthusiast because he was looking for a convenient, free way to transfer money from one person to another and discovered bitcoin. In fact, I would assert that there is no single person in existence who had a problem he wanted to solve, discovered that an available blockchain solution was the best way to solve it and therefore became a blockchain enthusiast.

The number of retailers accepting cryptocurrency as a form of payment is declining, and its biggest corporate boosters, like IBM, NASDAQ, Fidelity, Swift and Walmart, have gone long on press but short on actual rollout. Even the most prominent blockchain company, Ripple, doesn’t use blockchain in its product. You read that right: The company Ripple decided the best way to move money across international borders was to not use Ripples.

A blockchain is a literal technology, not a metaphor

Why all the enthusiasm for something so useless in practice? People have made a number of implausible claims about the future of blockchain—like that you should use it for AI in place of the type of behavior-tracking that Google and Facebook do, for example. This is based on a misunderstanding of what a blockchain is. A blockchain isn’t an ethereal thing out there in the universe that you can “put” things into; it’s a specific data structure, a linear transaction log, typically replicated by computers whose owners (called miners) are rewarded for logging new transactions.

There are two things that are cool about this particular data structure. One is that a change in any block invalidates every block after it, which means that you can’t tamper with historical transactions. The second is that you only get rewarded if you’re working on the same chain as everyone else, so each participant has an incentive to go with the consensus. The result is a shared definitive historical record. What’s more, because consensus is formed by each person acting in his own interest, adding a false transaction or working from a different history just means you’re not getting paid while everyone else is. Following the rules is mathematically enforced—no government or police force need come in and tell you the transaction you’ve logged is false (or extort bribes or bully the participants). It’s a powerful idea.
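Here is a minimal sketch of the data structure just described (in Python; illustrative only, leaving out the proof-of-work, consensus and rewards that real chains add on top): a chain of blocks, each carrying a hash of the previous one, so altering any historical block invalidates everything after it.

```python
# A toy hash chain: each block stores the hash of the previous block, so any
# tampering with history is detectable by re-checking the chain.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def is_valid(chain: list) -> bool:
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
print(is_valid(chain))                    # True
chain[0]["data"] = "Alice pays Bob 500"   # tamper with history
print(is_valid(chain))                    # False: every later block is now invalid
```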
So in summary, here’s what blockchain-the-technology is: “Let’s create a very long sequence of small files — each one containing a hash of the previous file, some new data and the answer to a difficult math problem — and divide up some money every hour among anyone willing to certify and store those files for us on their computers.”

See also: How Insurance Can Exploit Blockchain

Now, here’s what blockchain-the-metaphor is: “What if everyone keeps their records in a tamper-proof repository not owned by anyone?”

An illustration of the difference: In 2006, Walmart launched a system to track its bananas and mangoes from field to store. In 2009, Walmart abandoned the system because of logistical problems getting everyone to enter the data, and in 2017 Walmart re-launched it (to much fanfare) on blockchain. If someone comes to you with “the mango-pickers don’t like doing data entry,” “I know: let’s create a very long sequence of small files, each one containing a hash of the previous file” is a nonsense answer, but “What if everyone keeps their records in a tamper-proof repository not owned by anyone?” at least addresses the right question!

Blockchain-based trustworthiness falls apart in practice

People treat blockchain as a “futuristic integrity wand”—wave a blockchain at the problem, and suddenly your data will be valid. For almost anything people want to be valid, blockchain has been proposed as a solution. It’s true that tampering with data stored on a blockchain is hard, but it’s false that blockchain is a good way to create data that has integrity. To understand why this is the case, let’s work from the practical to the theoretical.

For example, let’s consider a widely proposed use case for blockchain: buying an e-book with a “smart” contract. The goal of the blockchain is, you don’t trust an e-book vendor, and the vendor doesn't trust you (because you’re just two individuals on the internet), but, because of blockchain, you’ll be able to trust the transaction. In the traditional system, once you pay you’re hoping you’ll receive the book, but once the vendor has your money the vendor doesn't have any incentive to deliver. You’re relying on Visa or Amazon or the government to make things fair—what a recipe for being a chump! In contrast, on a blockchain system, by executing the transaction as a record in a tamper-proof repository not owned by anyone, the transfer of money and digital product is automatic, atomic and direct, with no middleman needed to arbitrate the transaction, dictate terms and take a fat cut on the way. Isn’t that better for everybody?

Hmm. Perhaps you are very skilled at writing software. When the novelist proposes the smart contract, you take an hour or two to make sure that the contract will withdraw only an amount of money equal to the agreed-upon price, and that the book — rather than some other file, or nothing at all — will actually arrive. Auditing software is hard! The most heavily scrutinized smart contract in history had a small bug that nobody noticed — that is, until someone did notice it and used it to steal $50 million. If cryptocurrency enthusiasts putting together a $150 million investment fund can’t properly audit the software, how confident are you in your e-book audit? Perhaps you would rather write your own counteroffer software contract, in case this e-book author has hidden a recursion bug in his version to drain your ethereum wallet of all your life savings? It’s a complicated way to buy a book!
It’s not trustless; you’re trusting in the software (and your ability to defend yourself in a software-driven world), instead of trusting other people.

Another example: the purported advantages for a voting system in a weakly governed country. “Keep your voting records in a tamper-proof repository not owned by anyone” sounds right — yet is your Afghan villager going to download the blockchain from a broadcast node and decrypt the Merkle root from his Linux command line to independently verify that his vote has been counted? Or will he rely on the mobile app of a trusted third party — like the nonprofit or open-source consortium administering the election or providing the software?

These sound like stupid examples — novelists and villagers hiring e-bodyguard hackers to protect them from malicious customers and nonprofits whose clever smart contracts might steal their money and votes? — until you realize that’s actually the point. Instead of relying on trust or regulation, in the blockchain world, individuals are on purpose responsible for their own security precautions. And if the software they use is malicious or buggy, they should have read the software more carefully.

The entire worldview underlying blockchain is wrong

You actually see it over and over again. Blockchain systems are supposed to be more trustworthy, but in fact they are the least trustworthy systems in the world. Today, in less than a decade, three successive top bitcoin exchanges have been hacked, another is accused of insider trading, the demonstration-project DAO smart contract got drained, crypto price swings are 10 times those of the world’s most mismanaged currencies, and bitcoin, the “killer app” of crypto transparency, is almost certainly artificially propped up by fake transactions involving billions of literally imaginary dollars.

Blockchain systems do not magically make the data in them accurate or the people entering the data trustworthy; they merely enable you to audit whether the chain has been tampered with. A person who sprayed pesticides on a mango can still enter onto a blockchain system that the mangoes were organic. A corrupt government can create a blockchain system to count the votes and just allocate an extra million addresses to cronies. An investment fund whose charter is written in software can still misallocate funds.

How, then, is trust created? In the case of buying an e-book, even if you’re buying it with a smart contract, instead of auditing the software you’ll rely on one of four things, each of them characteristic of the “old way”: Either the author of the smart contract is someone you know of and trust, the seller of the e-book has a reputation to uphold, you or friends of yours have bought e-books from this seller in the past successfully, or you’re just willing to hope that this person will deal fairly. In each case, even if the transaction is effectuated via a smart contract, in practice you’re relying on trust of a counterparty or middleman, not your self-protective right to audit the software, each man an island unto himself. The contract still works, but the fact that the promise is written in auditable software rather than government-enforced English makes it less transparent, not more transparent.

The same goes for the vote counting.
Before blockchain can even get involved, you need to trust that voter registration is done fairly, that ballots are given only to eligible voters, that the votes are made anonymously rather than bought or intimidated, that the vote displayed by the balloting system is the same as the vote recorded and that no extra votes are given to political cronies to cast. Blockchain makes none of these problems easier and many of them harder — more importantly, solving them in a blockchain context requires a set of awkward workarounds that undermine the core premise. “So that we know the entries are valid, let’s allow only trusted nonprofits to make entries” — and you’re back at the good old “classic” ledger. In fact, if you look at any blockchain solution, inevitably you’ll find an awkward workaround to re-create trusted parties in a trustless world.

A crypto-medieval system

Yet absent these “old way” factors — supposing you actually attempted to rely on blockchain’s self-interest/self-protection to build a real system — you’d be in a real mess. Eight hundred years ago in Europe — with weak governments unable to enforce laws and trusted counterparties few, fragile and far between — theft was rampant, safe banking was a fantasy and personal security was at the point of the sword. This is what Somalia looks like now, and it is what it looks like to transact on the blockchain in the ideal scenario. Somalia on purpose. That’s the vision. Nobody wants it!

Even the most die-hard crypto enthusiasts prefer in practice to rely on trust rather than their own crypto-medieval systems. 93% of bitcoins are mined by managed consortiums, yet none of the consortiums use smart contracts to manage payouts. Instead, they promise things like a “long history of stable and accurate payouts.” Sounds like a trustworthy middleman!

See also: Collaborating for a Better Blockchain

Same with Silk Road, a cryptocurrency-driven online drug bazaar. The key to Silk Road wasn’t the bitcoins (those were just to evade government detection); it was the reputation scores that allowed people to trust criminals. And the reputation scores weren’t tracked on a tamper-proof blockchain; they were tracked by a trusted middleman! If Ripple, Silk Road, Slush Pool and the DAO all prefer “old way” systems of creating and enforcing trust, it’s no wonder that the outside world has not adopted trustless systems either!

In the name of all blockchain stands for, it’s time to abandon blockchain

A decentralized, tamper-proof repository sounds like a great way to audit where your mango comes from, how fresh it is and whether it has been sprayed with pesticides. But actually, laws on food labeling, nonprofit or government inspectors, an independent, trusted free press, empowered workers who trust whistleblower protections, credible grocery stores, your local nonprofit farmer’s market and so on do a way better job. People who actually care about food safety do not adopt blockchain, because trusted is better than trustless. Blockchain’s technology mess exposes its metaphor mess — a software engineer pointing out that storing the data as a sequence of small hashed files won’t get the mango pickers to accurately report whether they sprayed pesticides is also pointing out why peer-to-peer interaction with no regulations, norms, middlemen or trusted parties is actually a bad way to empower people.

Like the farmer’s market or the organic labeling standard, so many real ideas are hiding in plain sight.
Do you wish there was a type of financial institution that was secure and well-regulated in all the traditional ways but also had the integrity of being people-powered? A credit union’s members elect its directors, and the transaction-processing revenue is divided up among the members. Move your money! Prefer a deflationary monetary policy? Central bankers are appointed by elected leaders. Want to make elections more secure and democratic? Help write open-source voting software, go out and register voters or volunteer as an election observer here or abroad! Wish there was a trusted e-book delivery service that charged lower transaction fees and distributed more of the earnings to the authors? You can already consider stated payout rates when you buy music or books, buy directly from the authors or start your own e-book site that’s even better than what’s out there!

Projects based on the elimination of trust have failed to capture customers’ interest because trust is actually so damn valuable. A lawless and mistrustful world where self-interest is the only principle and paranoia is the only source of safety is not a paradise but a crypto-medieval hellhole.

As a society, and as technologists and entrepreneurs in particular, we’re going to have to get good at cooperating — at building trust and at being trustworthy. Instead of directing resources to the elimination of trust, we should direct our resources to the creation of trust—whether we use a long series of sequentially hashed files as our storage medium or not.

Kai Stinchcombe

Kai Stinchcombe is cofounder and CEO of True Link Financial, a financial services firm focused on the diverse needs of today’s retirees: addressing longevity, long-term care costs, cognitive aging, investment, insurance and banking.

Using IoT to Monitor Risk in Real Time

Streaming of data readings allows operational predictions 20 times earlier. What if risk management saw similar improvements?

Although monitoring risk is a key tenet of ISO 31000, I see little monitoring in most risk management systems. Periodic review, dashboards, heat maps and key risk indicator (KRI) reports are all review (a different ISO 31000 tenet), not monitoring. IoT technology can deliver real-time monitoring of risk for more than just physical environmental metrics.

To monitor means to supervise and continually check and critically observe. It means to determine the current status and to assess whether the required or expected performance levels are actually being achieved.

This is the fifth in the series on the Top 10 Disruptive Technologies that will transform risk management in the 2020s. This week, I look at how IoT technology can be extended to deliver real-time monitoring of risk for more than just physical environmental metrics. In my 2013 book “Mastering 21st Century Enterprise Risk Management,” I suggested “horizon scanning” as a method for monitoring risk and threats. With IoT, we have the opportunity to extend this from a series of discrete observations into continuous real-time monitoring. But let’s start with the basics.

What Is IoT – Intelligent Things?

The IoT acronym, for Internet of Things, like most IT acronyms, is meaningless, so it’s more recently being referred to as Intelligent Things, which is both more meaningful and allows for its expansion outside its original classification (I will come to that shortly). IoT technology is about collecting and processing continuous readings from wireless sensors embedded in operational equipment. These tiny electronic devices transmit their readings on heat, weight, counters, chemical content, flow rates, etc., to a nearby computer, referred to as the “edge,” which does some basic classification and consolidation and then uploads the data to the “cloud,” where a specialist analytic system monitors those readings for anomalies.

See also: Insurance and the Internet of Things

The benefits of IoT are already well-established in the fields of equipment maintenance and material processing (see Using Predictive Analytics in Risk Management). Deloitte found that predictive maintenance can reduce the time required to plan maintenance by 20% to 50%, increase equipment uptime and availability by 10% to 20% and reduce overall maintenance costs by 5% to 10%. Just as the advent of streaming video finally made watching movies online a reality, so streaming of data readings has produced a real paradigm shift in traditional metrics monitoring, including being able to make operational predictions up to 20 times earlier and with greater accuracy than traditional threshold-based monitoring systems. Think about it. What if we could achieve these sorts of improvements in risk management?

Monitoring Risk Management in Real Time

The real innovation from IoT is not the hardware technology but the software architecture built to process streaming IoT data. Traditionally, data was collected, then processed and analyzed. Like traditional risk management, it is historic and reactive. Traditional analytics used historical data to forecast what is likely to happen based on historically set targets and thresholds; e.g., when a sensor hits a critical reading, a release valve opens to prevent overload. Processing and energy have already been expended (lost), and the cause still needs to be rectified. IoT technology continuously streams data and processes it in real time. Streaming analytics attempt to forecast what data is coming.
Instead of initiating controls in reaction to what has happened, IoT streaming aims to alter inputs or the system itself to maintain optimum performance conditions. In an IoT system, inputs and processing are continually being adjusted based on the streaming analytics' expectations of future readings.

This technology will have a profound and transforming effect on risk management. When it migrates from measuring hardware environmental factors to software-based algorithms monitoring system processes and characteristics, we will be able to assess stresses and threats, both operational and behavioral.

See also: Predictive Analytics: Now You See It….

In the 2020s, risk management will be heavily driven by KRI metrics, and as such it will be a prime target for monitoring by streaming analytics. In addition to obvious environmental monitoring, streaming metrics could be used to monitor in real time staff stress and behavior, mistake (error) rates, satisfaction/complaint levels, process delays, etc. All change over time and can be adjusted in-process to prevent issues from arising.

In addition to existing general-purpose IoT platforms, such as Microsoft Azure IoT, IBM Watson IoT or Amazon AWS IoT, with the advent of “serverless apps” (this technology exists now), we will see an explosion of mobile apps available from public app stores to monitor every conceivable data flow, to which you will be able to subscribe and plug in to your individual data needs. We can then finally ditch the old reactive PDCA chestnut for the ROI method of process improvement and risk mitigation (see PDCA is NOT Best Practice).
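As a rough illustration of the difference between fixed-threshold alarms and streaming analytics, here is a minimal sketch (in Python; the exponentially weighted forecast and the sample KRI stream are illustrative assumptions, not the author's specific method) that continuously forecasts the next reading and flags drift early:

```python
# Streams KRI readings, maintains a simple exponentially weighted forecast of
# the next value and raises an early warning when readings drift away from
# expectation, before a hard threshold would trip.
def stream_monitor(readings, alpha=0.2, tolerance=3.0):
    """Yield (reading, forecast, alert) for each incoming KRI reading."""
    forecast = None
    for value in readings:
        if forecast is None:
            forecast = value
        alert = abs(value - forecast) > tolerance   # drifting from expectation
        yield value, forecast, alert
        forecast = alpha * value + (1 - alpha) * forecast  # update the forecast

# e.g. a streamed key risk indicator, such as daily error rates per 1,000 cases
error_rates = [4.1, 4.3, 4.0, 4.4, 4.2, 5.1, 6.7, 8.9, 11.2]
for value, forecast, alert in stream_monitor(error_rates):
    if alert:
        print(f"early warning: reading {value} vs. expected {forecast:.1f}")
```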

Greg Carroll

Greg Carroll is the founder and technical director of Fast Track Australia. Carroll has 30 years’ experience addressing risk management systems in life-and-death environments like the Australian Department of Defence and the Victorian Infectious Diseases Laboratories, among others.

Digital Innovation in Life Insurance

The focus can move from life protection to life enablement, support that helps people lead a long and healthy life.

There is a view that the life insurance business needs to change or to innovate to remain relevant and reach new customers; if incumbent players don’t create an environment that appeals to new customers, someone else will step in and do it in their place. It’s possible that big data warehouses - or mobile and digital operators - will mass-market life and health insurance protection to their customers, based on the data they hold about them. While we can’t rule it out, this outcome is unlikely for multiple reasons. For a start, life insurance is not a simple transaction, and access to it is selective. It requires capital, distribution channels, underwriting and claims expertise, product knowledge and actuarial know-how. While these challenges may deter new entrants, they are not insurmountable obstacles.

See also: Selling Life Insurance to Digital Consumers

Life and health insurance are closely linked to healthcare provision; our products align with fundamental biometric events - including ill health, disability, disease and death. It’s no surprise we maintain a mainly medical approach to risk selection. Technology offers new ways to manage risk that rely less on face-to-face disclosure and traditional clinical assessments. This is why we see so much interest in understanding how innovation might work. Healthcare is adopting artificial intelligence, virtual reality, machine learning, sensors and other innovative technologies - including genomics - to deliver a more patient-centric approach. Insurance can similarly transform by adopting more customer-centric solutions. In healthcare, digital channels and mobile health solutions are welcomed when they blend with traditional methods and work simply. For insurers, this could mean placing innovation into spaces where it helps customers and where it makes sense to augment existing ways of working.

See also: 2 Paths to a New Take on Digital

This means the focus is on how we engage with people and offer them services linked to ensuring their health. The focus can move from life protection to life enablement, support that helps people lead a long and healthy life. The industry needs the energy, innovative vision and technical skills of entrepreneurs. In turn, entrepreneurs need the network, customer base and data - as well as the insurance expertise - brought by insurers and reinsurers. Working together, we can create an environment where people will share their personal data, knowing we can be trusted to use it appropriately, to keep it safe and to make it a force for good. To do this means examining the emerging digital options and working out how to optimize the benefits for our customers.

Empowering Health Through Blockchain

It’s time to demand innovative solutions that leverage enhanced benefit plan design with emerging technology and contextual data.

As the U.S. continues to wrestle with healthcare and how to provide insurance, the country seems to be in a state of flux; many individuals and employers alike question how they will ultimately be affected. Warren Buffett and Charlie Munger have identified healthcare as the biggest issue facing American businesses, and the National Federation of Independent Business (NFIB) reports that the cost of health insurance is "the most severe" problem facing American small businesses today.

The growth in healthcare costs has long been an issue in a monopolized industry controlled by the major health carriers (i.e., the Blue Crosses, United, Cigna and Aetna). The problem started spiraling out of control when insurance industry leaders, e.g. MetLife, converted from mutual company structures to stock company structures. When the best interests of the consumer become misaligned with the best interests of the service provider, we create a conflict of interest. After all, their fiduciary duty is to their shareholders, not their consumers.

The benefits system in the U.S. has been flawed for many years. It is plagued by a lack of transparency and leaves the employer powerless to fight increased premiums at each renewal, for what is most often their second-largest expense next to payroll. It’s time to collectively question the status quo and demand innovative solutions that leverage enhanced benefit plan design with emerging technology and contextual data. Business owners' cost for healthcare should be directly correlated with the health risk and outcomes of their employees. All aspects of plan design need to be transparent, and business owners and employees must own their healthcare data, so they can understand exactly what is driving costs and actually control their spending.

Viable solutions will come through companies like iXledger, a London-based blockchain insurtech start-up and collaborator with Gen Re that has partnered with online information hub Self Insurance Market to develop a marketplace for the growing self-insurance risk management sector. The marketplace leverages iXledger’s blockchain platform to navigate the complex, data-intensive processes of self-insurance, providing the visibility, workflow and resource management needed to receive cost-effective bids for appropriate services.

See also: What Blockchain Means (Part 2)

The current group benefits market is primarily controlled and monopolized by the Blue Crosses, United, Cigna and Aetna (BUCAs), leading to diminishing provider networks, unclear benefits coverage and consistent premium increases over the last decade. Many American employees are unable to afford to participate in their own employer’s group medical plan, and Aetna recently announced that it will not pay commissions to brokers on groups with fewer than 100 insured lives.

Technology alone is not the key to driving down the cost of healthcare and enhancing benefits. The famed health insurance unicorn Oscar has the technology, but merely leveraging new tools with legacy processes is not going to yield significant returns. Disruption in healthcare requires a totally new approach, not just new technology applied to the current, monopolized benefit plan offering. Unfortunately, I believe Oscar will continue to lose to the BUCAs unless it can quickly pivot. Oscar is currently losing roughly $1,750 per member, yet its last capital round provided for a $2.7 billion valuation with 120,000 insured lives, or $22,500 per member.
Although Jeff Bezos and other technology leaders have defied all conventional means of valuation across the capital markets, an analysis of Oscar's business leaves me a bit puzzled. If you look at the member population, 48% of the New York enrollments in 2015 came from the ACA state exchange, whose enrollees are often high-risk members. Perhaps that is why Oscar's ratio of hospital costs to premiums earned was 75%, compared with 62% at UnitedHealthcare. The lack of capital relative to the BUCAs and Oscar's existing member risk population will make it quite difficult to compete.

See also: Blockchain Technology and Insurance

As Oscar shows, the solution to the health benefits crisis in the U.S. will not be driven by new technology and enhanced analytics alone, but by integrating enhanced data and new technology, such as telemedicine, with innovative and enhanced benefit plan designs similar to what iXledger is endeavoring to facilitate. The solution is a paradigm shift requiring new tools that compel new processes to put both employers and employees in control of their cost of healthcare while offering enhanced health benefits coverage.

Steven Schwartz

Steven Schwartz is the founder of Global Cyber Consultants and has built the U.S. business of the international insurtech/regtech firm Cyberfense.

Common Error on Going Digital

The process that many insurers currently use to capture underwriting data illustrates why digitization alone isn’t enough.

If you’re an insurance professional who follows industry trends, you’ve probably heard the phrase “digital transformation” many times from consultants and industry analysts. And if you’ve been in the business for more than a few years, you’ve likely seen a huge uptick in the use of digital tools. But many insurers mistake digitization, such as collecting forms on a tablet instead of paper, for digital transformation. The truly exciting business developments in digital transformation are found in automation.

Digital transformation isn’t about going paperless, though that’s a nice side benefit. It’s not about using apps to support the same old processes. Instead, it’s about rethinking traditional ways of doing business and replacing old processes with intelligent tools that eliminate or reduce the friction in transactions between carriers and customers. It’s about creating processes that address persistent pain points for insurers and policyholders alike.

Traditional Processes Don’t Work in the Modern World

For property insurers, underwriting without adequate data is a huge pain point. Many carriers either use area averages that may or may not reflect actual property value to assess risk, or they send out an inspector to conduct an assessment and create a report. Neither option is optimal in a world where customers expect personalization, transparency and speed. Assessing risk on averages can result in cancellations that disappoint customers and harm the brand. In-person inspections yield important data, but scheduling a time for the inspector to evaluate the customer’s property and home or business contents can be a time-consuming hassle on both the carrier and customer side. Carriers often wait weeks or months to receive a report from an inspector. It’s not uncommon for up to 60 days to elapse between the coverage request and receipt of the report, which is frustrating for carriers and customers alike.

See also: Digital Playbooks for Insurers (Part 4)

Why Digitization Isn’t Enough

The process many insurers currently use to capture underwriting data illustrates why digitization alone isn’t enough. Sending an inspector out with an iPad to file an electronic report might shave a couple of days off the underwriting cycle, but it’s using digital tools to support a process that is fundamentally broken. It’s a solution that doesn’t address the root cause of customer and carrier pain points. Instead of putting digital band-aids on broken procedures, it’s time to rethink processes and change workflows. It’s time to evaluate solutions that go beyond digitization and look for truly transformative technologies that harness data, automation and machine learning to create more efficient, effective processes. And it’s time to apply insurance-centric computer vision to new applications rather than adapting products that weren’t designed to address the unique issues that insurers face.

Automated Processes Deliver a Better Experience for Everyone

So, how can automation and machine learning improve the customer experience and streamline carrier operations? Recall the underwriting process that is currently painful on both sides of the transaction: Rather than improving it on the margins with digitization, what if insurers reinvented the underwriting process entirely, using technology that removes the friction from key processes and makes the experience more personalized for customers while improving pricing transparency?
Thanks to automation and machine learning that improve key underwriting processes, this is a reality today: Carriers can provide a link to a customer or third party to conduct an inspection through a smartphone camera lens. A friendly, artificial intelligence (AI)-powered assistant walks users through the inspection, automatically categorizing and inventorying items to create a baseline — a rich media record of the customer’s property and contents in near real time. Built-in, insurance-specific computer vision ensures that the AI inspector notices things a good human inspector would, such as the presence (or absence) of a fire extinguisher near an oven.

For insurers, this type of breakthrough automation eliminates the problem of delayed quotes and mispriced policies. It also opens new opportunities for the agent and the customer to work together to mitigate risk, which protects customer property and the insurer’s bottom line at the same time. It makes a formerly opaque process clear so that coverage is priced correctly, and, if there is a claim, there are no unpleasant surprises because customers know what they’re buying up-front.

From Customer Churn to Customer Delight

For customers, an underwriting solution that automates key processes makes getting quotes fast and easy, and it ensures that pricing is completely transparent, so they understand what they’re paying for prior to filing a claim. Using automation in the underwriting process makes it simple to document customer possessions, which provides peace of mind. Customers can follow step-by-step instructions from a conversational AI assistant, using a program that is intuitive and requires no training.

See also: How Underwriting Is Being Transformed

Underwriting that leverages automation makes it easy for customers to work with carriers to price their premiums correctly, identifying items under warranty and making suggestions to improve safety. In the event of a claim, having full documentation of all customer possessions streamlines the claims process. Perhaps most importantly, an underwriting solution that automates key processes allows the agent to focus on customer relationships — enabling agents to be more “heads up” rather than “heads down” over reports and paperwork.

The underwriting technology described here isn’t a vision of the future; it exists now, and carriers that are looking for a competitive edge are evaluating AI-powered technologies like this today, so they can not only improve processes but transform them. Sometimes, an opportunity comes along to skip interim steps and embrace a better future. Insurers today have such an opportunity; they can skip digitization and move toward digital transformation by adopting automated, AI-driven processes, and the choice couldn’t be more clear.
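Returning to the inspection workflow described above, here is a minimal, hypothetical sketch (in Python; the stubbed classifier, labels and flagging rule are illustrative assumptions, not the actual product) of how frames from a smartphone walkthrough might be aggregated into a baseline inventory with simple underwriting flags:

```python
# Aggregates per-frame labels from a (stubbed) computer-vision classifier into
# a baseline property inventory and applies a simple underwriting rule.
from collections import Counter

def classify_frame(frame: dict) -> list:
    """Stand-in for an insurance-specific computer-vision model."""
    return frame["labels"]          # in practice: model inference on the image

def build_baseline(frames: list) -> dict:
    inventory = Counter()
    for frame in frames:
        inventory.update(classify_frame(frame))
    flags = []
    if inventory["oven"] and not inventory["fire extinguisher"]:
        flags.append("No fire extinguisher detected near cooking appliances")
    return {"inventory": dict(inventory), "underwriting_flags": flags}

walkthrough = [
    {"room": "kitchen", "labels": ["oven", "refrigerator"]},
    {"room": "living room", "labels": ["sofa", "television"]},
]
print(build_baseline(walkthrough))
```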

Laurie Kuhn

Laurie Kuhn is COO and cofounder of Flyreel, the most advanced AI-assisted underwriting solution for commercial and residential properties. She brings 20-plus years of experience in digital innovation to Flyreel, where she leads the company’s product, marketing and operations strategies.

Motto for Success: 'Me, Free, Easy'

Today's products must be customized, inexpensive and simple to configure and understand -- and insurtech makes all three possible.

“Me, Free, Easy.” Oliver Bäte, the CEO of Allianz, gave a CeBIT speech with this headline that covers all revolutionary insurtech activities and gives us the most important hint about what products for customers will look like in the future. And this is not just applicable to insurance but to all businesses that would like to sell their products to new generations.

As every insurance professional knows, the main principle of insurance is the law of large numbers. This theorem describes “the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed” (Wikipedia). At first sight, “me, free, easy” and the law of large numbers could seem to conflict, because the law of large numbers requires doing the same thing over and over, while “me, free, easy” customizes almost every step of the insurance experience according to customers’ needs. But insurtech makes it possible to customize the insurance experiment for every client and produce millions of different versions of products.

See also: Insurtech Is Ignoring 2/3 of Opportunity

The biggest advantage of the insurance business is the ease of acquiring customer data. In many circumstances, customers are obliged to provide all the data their insurers ask for. The main dilemma here was how the data should be managed. I said “was” because, with insurtech implementations — e.g. artificial intelligence, machine learning or big data — managing millions of details about millions of customers is as easy as shelling peas. Today, insurance companies have magical tools that find the right data for the right time frame for the right customer in a couple of seconds. So, the first feature of the new-age insurance product, “me,” is accomplished!

“Free” means a world without intermediaries. No matter when, where or who you are, you can reach, buy and use the product. The main driver of a world without intermediaries is blockchain. As in many other industries, intermediaries mean checked (relatively trustworthy) data but greatly increased operational costs for insurers and customers. Intermediaries reach people, turn them into prospective customers, gather data and represent insurers’ corporate identity in many processes. Is it not too risky leaving your company’s reputation in another’s hands? Undoubtedly, the answer is yes. Thankfully, insurers don’t need to carry this risk any more. This responsibility will be transferred to customers, and all assessments will be performed by customers in accordance with the risk appetite of insurance companies and pre-defined criteria.

“Easy” means easily understandable products that do not require sophisticated financial literacy, so the policy owner does not need intense assistance with products (before and after the sale), either. Possible dependencies are defined, and customers are informed about every detail of the products. The key to "easy" is helping customers configure products to reflect their own needs. Big data management and other disruptive technologies enable insurers to make these configurations with their information technology. And now, we have more “easy” and more “me” insurance products.

See also: How to Collaborate With Insurtechs
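As a small aside on the law of large numbers referenced above, here is a minimal simulation (in Python; the claim frequency and severity figures are illustrative assumptions) showing the average claim cost per policy converging on its expected value as the portfolio grows, which is what makes pooled risk insurable:

```python
# Simulates portfolios of identical policies: a 5% chance of a claim whose
# size averages 10,000, so the expected cost per policy is 500. As the number
# of policies grows, the observed average converges on that expectation.
import random

CLAIM_PROBABILITY = 0.05
AVERAGE_CLAIM = 10_000
EXPECTED_COST = CLAIM_PROBABILITY * AVERAGE_CLAIM   # 500 per policy

random.seed(7)

def average_claim_cost(policies: int) -> float:
    total = sum(
        random.expovariate(1 / AVERAGE_CLAIM) if random.random() < CLAIM_PROBABILITY else 0.0
        for _ in range(policies)
    )
    return total / policies

for n in (100, 1_000, 10_000, 100_000):
    print(f"{n:>7} policies: average cost {average_claim_cost(n):8.2f} "
          f"(expected {EXPECTED_COST:.2f})")
```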

Zeynep Stefan is a post-graduate student in Munich studying financial deepening and mentoring startup companies in insurtech, while writing for insurance publications in Turkey.

Too Much Tech Is Ruining Lives

Social media has led to less human interaction, not more. It has suppressed human development, not stimulated it. We have regressed.

Just four years ago, I was a cheerleader. Social media was supposed to be the great hope for democracy. I know because I told the world so. I said in 2014 that no one could predict where this revolution would take us, and my conclusion was dusted with optimism: A better-connected human race would find a way to better itself. I was only half right: Nobody could have predicted where we have ended up. Yet my optimistic prognosis was utterly misguided. Social media has led to less human interaction, not more. It has suppressed human development, not stimulated it. As Big Tech has marched onward, we have regressed.

Look at the evidence. Research shows that social media may well be making many of us unhappy, jealous and — paradoxically — antisocial. Even Facebook gets it. An academic study that Facebook cited in a blog post revealed that when people spend a lot of time passively consuming information, they wind up feeling worse. Just 10 minutes on Facebook is enough to depress us; clicking and liking a multitude of posts and links seems to have a negative effect on mental health.

See also: The World Doesn’t Need Silicon Valley

Meantime, the green-eyed monster thrives on the social network: Reading rosy stories and carefully controlled images about the social and love lives of others leads to unflattering comparisons with one’s own existence. Getting out into the warts-and-all real world and having proper conversations would provide a powerful antidote. Some chance! Humans have convinced themselves that catching up online is a viable alternative to in-person socializing.

And what of consumer choice? Former Google design ethicist Tristan Harris noted, in an essay on how technology hijacks people’s minds, that technology is actually designed to give us fewer choices, not more. When you do a Google search for a restaurant, for example, you are presented with a limited set of choices, with advertisers appearing at the top of the list. We rarely browse to the second page of search results. Harris likened this to what magicians do: “Give people the illusion of free choice while architecting the menu so that they win, no matter what you choose.” We are becoming unthinkingly reliant on — addicted to — ease of use at the expense of quality. We are walking dumpsters for internet content that we don’t need and that might actively damage our brains.

The technology industry also uses another technique to keep us hooked: feeding us a bottomless pit of information. This phenomenon is the effect Netflix has when it auto-plays the next episode of a show after a cliffhanger and you continue watching, thinking, “I can make up the sleep over the weekend.” The cliffhanger is, of course, always replaced by another cliffhanger. The 13-part season is followed by another one, and yet another. We spend longer in front of the television, yet we feel no more satiated. When Facebook, Instagram and Twitter tack on their scrolling pages and update their news feeds, causing each article to roll into the next, the effect manifests itself again.

Perhaps we should go back to our smartphones and, instead of playing Netflix or sending texts on WhatsApp, use their core function: Call up our friends and family and have a chat or — better — arrange to meet them.

Meanwhile, Big Tech could carve an opportunity from a crisis. What about offering a subscription to an ad-free Facebook? In return for a monthly fee, searches would be based on quality of content rather than product placement. I would pay for that. The time savings alone when booking a trip would be worth it.

See also: No, AI Isn’t Taking Over Firms’ Decisions

Apple pioneered the Do Not Disturb function, which stopped messages and calls from waking us from sleep unless a set of emergency criteria was met by the caller. How about a Focus Mode that turned off all notifications and hid our apps from our home screen, to ease the temptation to play with our phones when we should be concentrating on our work or talking to our spouses, friends and colleagues?

In the 1980s, the BBC in Britain ran a successful children’s series called “Why Don’t You?” that implored viewers to “turn off their TV set and go out and do something less boring instead,” suggesting sociable activities that did not involve a screen. It was wise before its time. The TV seems like a puny adversary compared with the deadening digital army we face today.

Vivek Wadhwa is a fellow at Arthur and Toni Rembe Rock Center for Corporate Governance, Stanford University; director of research at the Center for Entrepreneurship and Research Commercialization at the Pratt School of Engineering, Duke University; and distinguished fellow at Singularity University.