
Will insurance avoid its own Blockbuster moment?


Now, more than ever, insurance executives should seek out opportunities to have their assumptions challenged, to understand how new technologies can be applied and to embrace the idea that even the most advanced innovations become easy to grasp when viewed through the lens of the consumer.

Conferences such as the recently concluded Global Insurance Symposium in Des Moines create an ideal environment both for distilling actionable information and for fostering candid discussion. Such events are part of a larger ecosystem that, for the first time in modern industrial history, has formed in advance of massive technological change, and that ecosystem gives insurance an advantage other industries haven't had as they faced major change over the past 20-plus years.

We can examine cautionary examples like Blockbuster, which in 1994 was valued at $8.4 billion. Ten years later, Blockbuster still had 84,300 employees and nearly 10,000 stores, but it was a dead man walking: Its valuation had already fallen to $4.7 billion by year-end 1997, and by 2004 it was too late for Blockbuster to reverse its fortunes.

What happened? Amazon launched its online store in July 1995. Netflix was founded on Aug. 29, 1997. YouTube launched on Feb. 14, 2005. But the video rental industry was focused on itself and didn't see that it was doomed almost from the day Blockbuster hit its 1994 peak. None of the global brick-and-mortar video rental chains invested in any of those startups at launch, because the chains didn't see the massive effects the startups would have.

If Blockbuster had had a warning system in place, might the outcome have been different?

Consider the adoption curves represented in these two charts from the World Economic Forum.

The chart on the left shows the adoption curves of household technologies before the internet. The curves on the right cover technologies adopted largely after the emergence of e-commerce and the advent of mobile. Note how dramatically shorter the adoption curves have become.

Those charts show that, while we can use Blockbuster and other cases to learn from the past, we also have to realize that the pace of change is increasing and that we need to accelerate with it. Although it’s generally accepted that the insurance industry is in the early stages of a sea change, the great irony is that the momentum is building based on business models and technologies that represent relatively incremental progress.

Insurtechs represent significant improvements in the practice of managing known risks. But that's not enough for the insurance industry to keep up. There is a tsunami of risktechs coming that are dedicated to reinventing risk. These are the firms, funded with nearly $350 billion in 2017 alone, that believe the losses we have experienced for the past century or two need not continue. These companies, like those that devastated Blockbuster, rely on technological breakthroughs that have already been in the works for more than a decade.

Again, no industrial sector upended by technology had the opportunity to benefit from the ideal trifecta of capital on hand, advance notice and the emergence of an ecosystem totally dedicated to the success of the incumbents. Insurance has all the tools needed to identify and deal with the fast pace of change that the emerging risktech competitors represent. What remains to be seen is whether existing insurance industry firms will leverage vision, capital, technologies, time and a support ecosystem to create the next great growth cycle. 

This is a time for giant killers: historic circumstances that level playing fields and that are filled with opportunities favoring the focused. Time will tell whether these unique circumstances will be leveraged by those who commit to clarity and growth and can see past the hype and chaos.

Guy Fraker
Chief Innovation Officer
Insurance Thought Leadership


Insurance Thought Leadership

Insurance Thought Leadership (ITL) delivers engaging, informative articles from our global network of thought leaders and decision makers. Their insights are transforming the insurance and risk management marketplace through knowledge sharing, big ideas on a wide variety of topics, and lessons learned through real-life applications of innovative technology.

We also connect our network of authors and readers in ways that help them uncover opportunities and that lead to innovation and strategic advantage.

Seeing Through Digital Glasses

The digitization of assets is just the first step on a pathway that will lead to the next phase: creating the digital experience.

Extraordinary change is taking place throughout the insurance industry and everything that surrounds it. The whole world is going digital. It's the new reality, and there is no getting around it. But just the word, or even the idea, of digital has many people, insurers included, pondering basic and fundamental questions: Why do we need to be digital? How do we go digital? And what, exactly, is digital anyway? To get a visual of what digital means for insurance, look through the lens of digital transformation.

Defining Digital

Defining digital is important because it is a very broad topic with far-reaching implications. Simply put (and without getting into the science behind it), digital is a way of doing things. And becoming digital is a state that will be required for moving around in the digital world: communicating, shopping, traveling, doing business, keeping a competitive edge and much, much more.

See also: Future of Digital Transformation

Going Digital

Digital is such a broad concept that it is easy to get too narrowly focused on just one part of everything that makes up digital. For example, digital is a format for storing assets of many kinds; it is a way of transmitting data and information from one place to another; and it is a way of interacting with data. But that's not all. The digitization of assets is just the first phase, or step, on a pathway that will lead to the next phase of digital: creating the digital experience. The experiential phase of digital begins when digital assets come into use. Now digital becomes the method that undergirds the interaction. Digital assets are transmitted digitally to create digital experiences via portals, mobile apps, websites, sensors, wearables and many other digital things. In the third phase, digital transformation, digital presence and capabilities go further, to an expanded and progressive digital experience that touches everything, connects the parts, resets expectations, broadens horizons and transforms the lives of everyone it touches.

Being Digital

For insurers, the journey of digital transformation will involve rethinking the insurer's value proposition and internal business operations, embracing data and advanced analytics, creating and supporting all means of engagement and automation across the company and delivering enriched customer solutions that are thoroughly integrated with internal operations as a seamless, personalized experience from start to finish and everywhere in between. Digital is the thread that will connect and unite all systems, processes and strategic initiatives and tie them together into an enterprise that is ready for the future. And digital transformation is the strategic initiative that will set the foundation, the context and the direction for the next-generation insurance company.

See also: Digital Transformation: How the CEO Thinks

In our new report, Digital Transformation in Insurance: Discovering the Pathway to Digital Maturity, SMA introduces its Digital Maturity Model, a model for developing a digital strategy that will be fundamental to success in the digital world, both today and tomorrow. Click here for a copy.

Deb Smallwood

Deb Smallwood, the founder of Strategy Meets Action, is highly respected throughout the insurance industry for strategic thinking, thought-provoking research and advisory skills. Insurers and solution providers turn to Smallwood for insight and guidance on business and IT linkage, IT strategy, IT architecture and e-business.

Key for Hiring Successful Producers

Here’s a chilling stat on the need for a great onboarding process: For every two producers you hire, only one will pan out.

Here's a chilling stat: For every two producers you hire, only one will pan out. According to a well-referenced study by Reagan Consulting, just over half of new agents and brokers are successful; the other 44% wash out before they can be of value to the organization. The picture gets even more ominous when you consider that as many as 60% of firms aren't hiring enough producers to meet their growth goals. That makes retaining producers and validating them quickly particularly important. Plenty of organizations have robust hiring departments and devote a lot of resources to attracting and landing top candidates. But far too many firms squander this top talent once a new employee is through the front door, and good producers end up leaving the organization. This isn't only a drain on resources; with the average cost to replace an employee amounting to as much as two times his or her annual salary, it hurts your bottom line, too. While poor hires likely account for some of the turnover, the more common cause is subpar onboarding and early employee training.

See also: Why You Need Happy Producers (Part 2)

The benefits of a formal onboarding program

Every organization has an onboarding program, whether it realizes it or not. Keep a new hire waiting in the lobby until his direct supervisor shows up a half-hour later? Shuffle the new hire from introduction to introduction and forget to tell him where the bathroom is? That's your onboarding program. Undoing those early negative impressions is a real uphill battle. Study after study has shown that formalizing your onboarding program is key to maximizing its success. Employees are 69% more likely to stay with an organization for up to three years if it has a formalized onboarding process, according to the Society for Human Resource Management (SHRM). Considering Reagan Consulting's finding that it takes an average of 32 months to validate successful producers, keeping agents and brokers around beyond the three-year mark can have a huge impact on an agency's bottom line. An effective, formal onboarding process doesn't just keep employees around longer; it can also significantly expedite the time to validate and improve employee engagement. SHRM researchers identified additional advantages that ultimately benefit both employees and employers:
  • Higher job satisfaction
  • Organizational commitment
  • Higher performance levels
  • Career effectiveness
  • Lowered stress
What's more, a formal onboarding process can help managers more quickly identify producers who aren't a good fit for the organization. Spotting red flags during the onboarding process can help you make adjustments (up to and including termination) before you've spent too much time and too many resources on the employee. This enables organizations to identify the financial impact—positive or negative—earlier in the validation process and adjust accordingly.

Effectively onboarding all new producers

Your formalized producer-onboarding process needs to be consistent across all new producer hires to ensure more useful benchmarks and metrics. But developing a program that works for the many different kinds of producers is challenging. Each hire brings his or her own set of experiences and gaps in skills. Broadly, new producers typically fall into one of four categories based on their experience with sales and the insurance industry:
  • No experience — little to no familiarity with sales or the insurance industry
  • Sales experience — familiarity with the sales process but little to no exposure to the insurance industry and agent/broker processes
  • Insurance experience — knowledge of the industry but little to no sales experience
  • Established producer — proven agent or broker who may have worked for a competitor
Onboarding processes need to add value for all of these groups. A robust, fully formed onboarding program should begin well before an employee's first day and extend well past the six-month mark. But the entire program doesn't have to be built at once. Start with something as simple as sending an email: When Google reminded hiring managers to plan an employee's first day, the new hire got up to speed 25% faster and reached peak productivity a full month earlier, all thanks to that leg up on day one.

See also: Happy Producers, Happy Customers

Focus on onboarding efforts that give producers a better idea of what your company is all about, beginning with company culture and specific company processes and procedures. When done right, explaining HR policies and company handbook rules can reveal a lot about your organizational culture. As you work to formalize your onboarding process, make sure to regularly check in with managers and new hires to verify the effectiveness of your efforts and find areas for improvement.

Ann Myhr

Ann Myhr is senior director of Knowledge Resources for the Institutes, which she joined in 2000. Her responsibilities include providing subject matter expertise on educational content for the Institutes’ products and services.

Low-Risk Doesn’t Mean No-Risk

Myths often undercut the true dangers of flooding and leave home and business owners across the country woefully underprepared.

From "I thought homeowners insurance covered that" to "I'm in a low-risk flood zone, so I don't need flood insurance," flood insurance agents have heard all the myths about why homeowners don't need flood insurance. Unfortunately, these myths often undercut the true dangers of flooding and leave home and business owners across the country woefully underprepared if a flood event does occur. In 2017 alone, the National Oceanic and Atmospheric Administration reported 16 separate disasters in the U.S., each with damages exceeding $1 billion, which together generated record losses in excess of $306 billion. A large percentage of these damages were caused by flooding, including damages associated with Hurricane Harvey in Texas and Louisiana, and record high water levels in Missouri, Arkansas and Illinois. While this devastation has certainly brought the conversation about flooding and flood damage back to the forefront, it also may have relayed a subtle, worrisome message: Flooding events only affect certain regions at certain times of the year, and 2017 was an anomaly. The reality is quite the opposite. Flooding disasters can happen any time of the year, anywhere across the country. In fact, 25% of all flood damage in the U.S. occurs in what the National Flood Insurance Program (NFIP) classifies as low- to moderate-risk flood zones, and destructive flood events have occurred in 98% of counties across the country. This is especially troubling when, as the NFIP estimates, just one inch of water intrusion can cause more than $20,000 in damage, and the average NFIP claim is around $43,000. In addition, one-third of FEMA disaster assistance goes to properties in these low- to moderate-risk zones, but it's rarely enough to cover the damages. The data from these costly flood events reveals a story of loss and hardship that needs to be more widely understood, and now—with the bevy of new private flood insurance options in place—is the perfect time for agents and organizations to research the facts and broadcast the message widely to the home and business owners they protect.

What Agents Need to Know

Flooding is not restricted to heavy tropical storms. It takes multiple forms that can affect other regions of the U.S., including rapid rainfall or structural failure leading to flash floods, spring snowmelt, changing weather patterns, clogged rainwater systems and new building development. Flooding can even be triggered by drought and wildfires. Why? Because wildfires and droughts alter soil conditions, leading to reduced absorption and increased runoff during any heavy rain that follows.

See also: Future of Flood Insurance

Simply put, flooding is the costliest and most common natural disaster in the U.S., especially considering that it can strike any time of year. A single flood event can destroy a property and wreak irreparable damage on the finances of uninsured home and business owners. Yet, despite these dangers, only 12% of homeowners have flood insurance. For those in low-risk areas, or those without mortgages and therefore not required by lenders to purchase NFIP policies, this is a huge problem. Consider the following:
  • A March 2017 survey by InsuranceQuotes found that 56% of respondents mistakenly believed that a standard homeowners policy covers flood damage.
  • Most business insurance policies exclude flood damage.
  • FEMA flood maps may be outdated, and flood risks are changing rapidly. Some research suggests that current maps vastly underestimate those in the high-risk, 1-in-100-year floodplain, with one study suggesting that 40 million Americans, instead of the current estimate of 13 million, are at high risk.
  • Hurricane Harvey hit low-risk flood areas, and fewer than 20% of homes damaged in Hurricane Harvey were flood-insured. In total, just 15% of all the 1.8 million homes in Harris County (Houston) had flood insurance, including only 28% of the homes in high-risk areas.
  • NFIP policies in low-risk, preferred risk areas are as low as $500 a year, while a flood claim averages $43,000.
Collectively, these statistics highlight the importance of flood insurance for all Americans, no matter where they live. As agents, it's our job to understand the potential risks in our communities and educate clients—and potential clients—on their flood risk so we can advise them in the best possible way.

What Are the Solutions?

For agents and others in the insurance industry, spreading the flood risk message requires perseverance and a keen understanding of the different insurance products that are available. While it's important that owners and tenants know their FEMA-assigned flood risk, it's also key that they understand the shifting nature of flood risk: The dangers are changing, and every property is at risk at any time of year. Additionally, while government-backed NFIP policies are a critical way for many Americans to secure coverage, they are not the only option. Some homes and businesses may need excess coverage up to replacement cost, and still other eligible properties may benefit from the array of coverage options available with private insurance products instead of an NFIP plan. That's why it's critical for agents to analyze the true needs of our clients and regularly communicate any changes or updates that may be necessary. The NFIP provides coverage limits up to $250,000 for the structure of a home and up to $100,000 for personal possessions. For business owners, coverage limits are up to $500,000 for a structure and $500,000 for contents. Depending on the value of their structure and belongings, some home and business owners may need excess coverage, provided by private insurers to supplement their NFIP policies and provide additional coverage options (a simple illustration follows below). For example, NFIP policies do not cover additional living expenses—like the rental of a hotel room if a family is displaced from their home—or coverage for basements and pools. Private flood insurance policies, however, may provide coverage for these things and more, to better serve consumers.

Homeowners and renters aren't the only ones who can be helped by private flood insurance options. Often, much of the focus during catastrophic flood events is on homes and families, but the effect on businesses is just as devastating. Buildings and inventories have been destroyed, computer equipment wiped out and workers left with nowhere to go—and no job left to do. Often, the cost of recovery from a flood is too much for a business to manage, with 40% of small businesses never re-opening their doors after a disaster.

See also: Hurricane Harvey: A Moment of Truth

The fact is, whether for a tenant or a building owner, an uninsured company can lose everything in an instant when the water rises. The message agents deliver to business owners needs to mirror the one delivered to homeowners: The risks of flooding are high, and the cost of inaction could be financially unbearable. Agents need to make a special effort to educate business owners on the topic. After all, a company that is willing to invest in anti-theft measures, IT protection and liability insurance should recognize the real financial dangers that floods can deliver. At the end of the day, private flood insurance options will only be embraced if agents and their clients are fully educated about the risk of flood, the limits of the NFIP and all the changing options available.
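To make the coverage limits above concrete, here is a minimal Python sketch of the arithmetic an agent might walk through when checking whether a client's values exceed the NFIP caps. The function, names and figures are illustrative assumptions, not a rating tool or any carrier's actual method.

```python
# Hypothetical sketch: estimate how far a property's values exceed the NFIP
# limits cited above. Figures and names are illustrative only.

NFIP_LIMITS = {
    "residential": {"building": 250_000, "contents": 100_000},
    "commercial": {"building": 500_000, "contents": 500_000},
}

def excess_coverage_gap(occupancy, building_value, contents_value):
    """Return the value above NFIP limits that excess or private
    flood coverage would need to address."""
    limits = NFIP_LIMITS[occupancy]
    return {
        "building_gap": max(0, building_value - limits["building"]),
        "contents_gap": max(0, contents_value - limits["contents"]),
    }

if __name__ == "__main__":
    # A custom-built home valued well above the $250,000 NFIP building limit.
    print(excess_coverage_gap("residential", 420_000, 150_000))
    # -> {'building_gap': 170000, 'contents_gap': 50000}
```

Running the example flags a $170,000 building gap and a $50,000 contents gap that excess or private coverage could address.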
The bottom line is: Flood insurance options are expanding, and we as agents need to understand the full array of flood coverage possibilities available to ensure that those we serve have the opportunity to choose the coverage that will best protect their homes, their families and their businesses.

Patty Templeton-Jones

Patty Templeton-Jones serves Wright Flood as the president and chief program advocate working cooperatively with FEMA/NFIP as well as congressional representatives to recommend flood reform solutions and review practical impacts of current and future flood reforms.

End of the Road for OBD in UBI Plans?

In theory, there are benefits from reading vehicle data and being connected to the car, but the reality has proven massively different.

I recently attended a telematics event in Brussels and had an interesting discussion about the future of on-board diagnostics (OBD) in auto insurance. I have been in the European telematics usage-based insurance (UBI) space for a long time and have seen all sorts of solutions adopted by insurers launching programs for consumers: hidden black boxes, windscreen devices, battery-mounted devices and tags, all with different degrees of success. I have rarely seen OBD dongles succeed. In theory, there are benefits to reading vehicle data and being connected to the car, but the reality has proven very different.

First, OBD devices have proved inconvenient for consumers. Each vehicle puts the port in a different position, and unless consumers are carefully guided they simply won't find it. If they do, the port can be in an inconvenient place, which either makes the device an eyesore in the car or an annoyance, because it can detach when the driver gets into and out of the car. Some less-expensive OBD models, without GPS and GSM, can be paired with phones, but even that experience has never been straightforward, because of differing Bluetooth standards. So the promise of self-installation really did not work out.

Car manufacturers don't help the situation. They continuously update their vehicle software, which can cause compatibility problems for OBD makers every time a new model comes to market. Guess who discovers this first? Consumers.

OBD devices have also proved inconvenient for insurers. When insurers launch a new UBI program, they want the data to be standardized across all available vehicles. But with all their issues with compatibility and installation, OBD devices in Europe have never been able to deliver the standardization that makes driving data interesting to insurers at scale.

See also: Advanced Telematics and AI

OBD devices have had some success in countries like the U.S., mainly because of different OBD data standards, bigger cars and greater consumer awareness. But even in the U.S., insurers are abandoning OBD dongles for smartphones, which can provide better customer experiences and adoption rates. Perhaps most damaging of all, car makers are starting to limit access to the OBD port to protect consumers from hackers and bad experiences. Ultimately, the port was created years ago for diagnostic purposes but has lately been used by hardware providers for other ends. Organizations interested in vehicle data will probably be directed by OEMs to access driving data from the cloud, through highly secure access systems, rather than from the vehicle itself. This is why we won't see many insurers launching new OBD-based UBI programs.

Private Options for Flood Insurance

Seven questions that simplify the complexities of flood insurance in the midst of regulatory changes and extreme weather events.

Congress has once again extended the current mandate for the National Flood Insurance Program (NFIP). If you have used the NFIP in the past to deliver flood insurance to clients, you know all too well that trying to simplify the complexities of flood insurance in the midst of regulatory changes and extreme weather events is an important yet arduous undertaking. The industry is filled with constant change, and people need alternatives. This series of guiding questions can help facilitate conversations with your clients so you can work together to thoughtfully explore all options for protecting their property and valued possessions.

7 Guiding Flood Insurance Questions

1. Does your client need specialized flood insurance coverage? Consider flood insurance coverage in terms of the specifics of the property and the property owner. Is your client a landlord? Is your client on a fixed income? Is this person holding properties for income-generating purposes? By understanding the needs of your clients, you can more effectively navigate the suite of flood insurance options available today. Private flood insurance enables property owners to supplement the NFIP product, providing coverage that homeowners expect from their homeowners policies for exposures such as outdoor property, detached structures, swimming pools and basements.

See also: Future of Flood Insurance

2. Does your client have a finished basement or pool? The NFIP does not cover personal property in basements, so displaced homeowners or homeowners with built-out basements are responsible for these bills. If a storm surge dumps a ton of sand into your client's pool, is your client prepared to shoulder the costs of the resulting clean-up? By understanding your client's lifestyle and property usage, you can deliver meaningful solutions. Private options can help.

3. Does your client's property value exceed $250,000? The value of custom-built homes continues to increase, with replacement costs rising well above $250,000, the current limit on government-issued coverage. Now, owners of residential homes have options with higher coverage limits at affordable rates through private flood insurance programs.

4. Would your client need assistance with additional living expenses after flood damage? When weighing coverage options, remember that the NFIP does not cover additional living expenses. With a private flood policy, your client can opt to add additional living expense coverage. This valuable coverage helps homeowners who have been displaced by a flood by covering the costs of shelter and meals.

5. Are your client's personal belongings valued at more than $100,000? Consider your client's property holdings beyond the physical structures she owns. For example, if your client is a landlord or holds income-generating properties, she typically doesn't need contents coverage. However, some clients may need more coverage than what is available from the NFIP to protect their personal treasures.

6. Would your client prefer an easy application process without the hassle of submitting photographs or an elevation certificate? The speed of delivery and streamlined processes of today's private flood insurance options are increasingly attractive to clients. Plus, property owners can often obtain a quote without an elevation certificate and without providing property photographs.

See also: Time to Mandate Flood Insurance?

7. Would you like to save your clients money by avoiding federal surcharges or reserve fund assessments?
Private products are not subject to federal surcharges or reserve fund assessments and may be less expensive to purchase than NFIP flood insurance. With the NFIP reauthorization debate continuing, Congress struggles to make flood insurance affordable and improve claims standards. Discussions continue around the development and delivery of dependable, disciplined, reliable private insurance to help more people protect their financial livelihood. Presenting private flood insurance options not only helps your clients make more informed decisions, it enhances the value you bring to your relationship as you work together to help them protect what matters most – their families, homes and treasured possessions. Today, private flood insurance is available in every state, through multiple channels and multiple locations. Companies have the capacity to step in and offer a suite of comprehensive private options for their clients. Private flood insurance is embedded into many brand name lenders that facilitate a loan closing and help Americans get into their dream home, without interruptions.

John Dickson

John Dickson is president and CEO of Aon Edge. In this role, Dickson oversees the delivery of primary, private flood insurance solutions as an alternative to federally backed flood insurance.

How to Innovate With Microservices (Part 3)

The microservices architecture solves a number of problems with legacy systems and can continue to evolve over time.

In Part 2 of this blog series, we shared how a microservices architecture is applicable to the insurance industry and how it can play a big role in insurance transformation. This is especially true because the insurance industry is moving to a platform economy, with heavy emphasis on the interoperability of capabilities across a diverse ecosystem of partners. In this segment, we will share our views on best practices for adopting a microservices architecture to build new applications and transform existing ones. Now that we have made a sufficient case for microservice architecture's ability to bring speed, scale and agility to IT operations, we should contemplate how we can best think about microservices. How can we transform existing monoliths into a microservices architecture? Although the approach for designing microservices may vary by organization, there are best practices and guidelines that can assist teams in the midst of making these decisions.

How many microservices are too many?

Going "too micro" is one of the biggest risks for organizations that are still new to microservices architectures. If a "thesis-driven" approach is adopted, there will be a tendency to build many smaller services. "Why not?" you may ask. "After all, once we buy into the approach, shouldn't we just go 'all in'?" We encourage insurers to be careful and test the waters. We would caution against starting out with too many smaller services, due to the increased complexity of mixed architectures, the steep curve of upfront design and the significant changes in development processes, as well as a lack of DevOps preparedness. We suggest a "use-case-driven" approach. Focus on urgent problems, where rapid changes are needed by the business to overcome system-inhibiting issues, and break the monolith module into the microservices that will serve current needs, not necessarily finer microservices based on assumptions about future needs. Remember, if we can break the monolith into microservices, we can later make those microservices more granular as needed, instead of incurring the complexity of too many microservices without an assurance of future benefits.

What are the constraints (lines of code, language, etc.) for designing better microservices?

There are many myths about the number of lines of code, programming languages and permissible frameworks (just to name a few) for designing better microservices. One argument holds that if we do not set fixed constraints on the number of lines of code per microservice, each service will eventually grow into a monolith. Although that is a valid concern, an arbitrary size limit on lines of code will create too many services and introduce cost and complexity. If microservices are good, will "nanoservices" be even better? Of course not. We must ensure that the costs of building and managing a microservice are less than the benefit it provides; hence, the size of a microservice should be determined by its business benefit, not by lines of code. Another advantage of a microservices architecture is the interoperability between microservices, regardless of underlying programming language and data structure. There is no one framework, programming language or database that is better-suited than another for building microservices. The choice of technology should be made based on the underlying business benefits that a particular technology provides for accomplishing the purpose of the microservice.
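To make the "use-case-driven" idea concrete, here is a minimal sketch of a single, narrowly scoped service carved out of a hypothetical policy-administration monolith. It assumes Python and Flask, which the article does not prescribe; the route, data and fields are invented for illustration.

```python
# Minimal sketch of one use-case-driven microservice: it owns only the data
# and logic needed to answer claim-status queries, and nothing else.
# Names and fields are hypothetical, not from any real system.
from flask import Flask, jsonify, abort

app = Flask(__name__)

# In a real service this would be the service's own datastore,
# not a table shared with the monolith.
CLAIM_STATUS = {
    "CLM-1001": {"status": "open", "last_update": "2018-05-01"},
    "CLM-1002": {"status": "closed", "last_update": "2018-04-12"},
}

@app.route("/claims/<claim_id>/status", methods=["GET"])
def claim_status(claim_id):
    record = CLAIM_STATUS.get(claim_id)
    if record is None:
        abort(404)
    return jsonify(record)

if __name__ == "__main__":
    app.run(port=5001)
```

The point is the scope: the service owns only claim-status data and logic, and it can be split further later if a real business need emerges.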
Preparing for this kind of flexible framework will give insurers vital agility moving forward.

See also: Are You Innovating in the Dark?

How do microservices affect development processes?

A microservices architecture promotes small, incremental changes that can be deployed to production with confidence. Small changes can be deployed quickly and tested. Using a microservices architecture naturally leads to DevOps. The goal is better deployment quality, faster release frequency and improved process visibility. The increased frequency and pace of releases mean you can innovate and improve the product faster. Putting a DevOps pipeline with continuous integration and continuous deployment (CI/CD) into practice requires a great deal of automation. This requires developers to treat infrastructure as code and policy as code, shifting the operational concerns about managing infrastructure needs and compliance from production to development. It is also very important to implement real-time, continuous monitoring, alerting and assessment of the infrastructure and application. This will ensure that the rapid pace of deployment remains reliable and promotes consistent, positive customer experiences. To validate that we are on the right path, it is important to capture some metrics on the project. Some of the key performance indicators (KPIs) we like to look at are listed below (with a brief sketch, after the list, of how two of them might be computed):
  • MTTR – The mean time to respond as measured from the time a defect was discovered until the correction was deployed in production.
  • Number of deploys to production – These are small, incremental changes being introduced into production through continuous deployment.
  • Deployment success rate – Only 0.001% of AWS deployments cause outages! When done properly, we should see a very high successful deployment ratio.
  • Time to first commit – This is the time it takes for a new person joining the team to release code to production. A shorter time indicates well-designed microservices that do not carry the steep learning curve of a monolith.
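As referenced above, here is a small sketch of how two of these KPIs (MTTR and deployment success rate) might be computed from deployment and incident logs. The record formats are assumptions for illustration, not a prescribed schema.

```python
# A small sketch, under assumed record formats, of computing MTTR and
# deployment success rate from incident and deployment logs.
from datetime import datetime, timedelta

incidents = [
    # (defect discovered, correction deployed to production)
    (datetime(2018, 5, 1, 9, 0), datetime(2018, 5, 1, 11, 30)),
    (datetime(2018, 5, 3, 14, 0), datetime(2018, 5, 3, 14, 45)),
]
deployments = {"total": 120, "failed": 3}

mttr = sum((fixed - found for found, fixed in incidents), timedelta()) / len(incidents)
success_rate = 1 - deployments["failed"] / deployments["total"]

print(f"MTTR: {mttr}, deployment success rate: {success_rate:.1%}")
```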
Principles for Identifying Microservices, With Examples

More important than the size of a microservice is the internal cohesion it must have and its independence from other services. For that, we need to inspect the data and processing associated with the services. A microservice must own its domain data and logic, which leads to a domain-driven design pattern. However, a complex domain model can often be better represented as multiple interconnected smaller models. For example, consider an insurance model composed of multiple smaller models, where the party model can be used as claim-party and also as insured (among various others). In such a multi-model scenario, it is important to first establish a context for the model, called a bounded context, that closely governs the logic associated with the model. Defining a microservice for a bounded context is a good start, because the two are closely related. Along with bounded contexts, aggregates used by the domain model that are loosely coupled and driven by business requirements are also good candidates for microservices, as long as they exhibit the main design tenets of microservices; for example, services for managing vehicles as an aggregate of a policy object. While most microservices can be identified through this kind of domain model analysis, there are a number of cases where the business processing itself is stateless and does not modify the data model at all, for example, identifying the risk locations within the projected path of a hurricane. Such stateless business processes, which follow the single responsibility principle, are also great candidates for microservices. If these principles are applied correctly, the result is loosely coupled, independently deployable services that follow the single responsibility model without causing chattiness across the microservices. They can be versioned to allow client upgrades, provide fallback defaults and be developed by small teams.

Co-Existing With Legacy Architecture

Microservices provide a perfect tool for refactoring legacy architecture by applying the strangler pattern, which gives new life to legacy applications by gradually moving out, as microservices, the business functions that will benefit most. Applying this pattern requires a façade that can intercept calls to the legacy application. A modern digital front end, which can offer a better UX and provides connectivity to a variety of back ends by leveraging EIP, can be used as the strangler façade connecting to existing legacy applications. Over time, those services can be built directly on a microservices architecture, eliminating calls to the legacy application. This approach is best suited to large legacy applications; for smaller systems that are not very complex, the insurer may be better off rewriting the application. A minimal façade sketch, under stated assumptions, follows.
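Here is that minimal strangler-façade sketch, again assuming Python, Flask and the requests library, none of which the article prescribes. The internal host names and routing table are hypothetical; the idea is simply that the façade intercepts calls and sends already-migrated paths to the new microservice while everything else passes through to the legacy monolith.

```python
# A strangler-façade sketch: route migrated paths to new services,
# pass everything else through to the legacy monolith.
# Hosts and route names are hypothetical placeholders.
from flask import Flask, request, Response
import requests

app = Flask(__name__)

LEGACY_BASE = "http://legacy-app.internal"            # existing monolith (assumed)
MIGRATED = {"/claims": "http://claims-svc.internal"}   # functions already strangled out

@app.route("/<path:path>", methods=["GET"])
def route(path):
    prefix = "/" + path.split("/")[0]
    base = MIGRATED.get(prefix, LEGACY_BASE)
    upstream = requests.get(f"{base}/{path}", params=request.args)
    return Response(upstream.content, status=upstream.status_code,
                    content_type=upstream.headers.get("Content-Type"))

if __name__ == "__main__":
    app.run(port=8080)
```

As more functions are strangled out, entries are added to the routing table until the legacy base is no longer needed.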
How to Make Organizational Changes to Adopt Microservices-Driven Development

Adopting microservices-driven development requires a change in organizational culture and mindset. The DevOps practice shifts siloed operations responsibilities to the development organization. With the successful introduction of microservices best practices, it is not uncommon for developers to do both. Even when the two teams exist separately, they have to communicate frequently, increase efficiencies and improve the quality of the services they provide to customers. The quality assurance, performance testing and security teams also need to be tightly integrated with the DevOps teams by automating their tasks in the continuous delivery process.

See also: Who Is Innovating in Financial Services?

Organizations need to cultivate a culture of shared responsibility, ownership and complete accountability in microservices teams. These teams need to have a complete view of the microservice from a functional, security and deployment-infrastructure perspective, regardless of their stated roles. They take full ownership of their services, often beyond the scope of their roles or titles, by thinking about the end customer's needs and how they can contribute to solving them. Embedding operational skills within the delivery teams is important to reduce potential friction between the development and operations teams. It is important to facilitate increased communication and collaboration across all the teams. This can include the use of instant messaging apps, issue management systems and wikis. It also helps other teams, like sales and marketing, allowing the complete enterprise to align effectively toward project goals.

As we have seen in these three blogs, the microservices architecture is an excellent solution for legacy transformation. It solves a number of problems and paves the path to a scalable, resilient system that can continue to evolve over time without becoming obsolete. It allows rapid innovation with a positive customer experience. A successful implementation of the microservices architecture does, however, require:
  • A shift in organization culture, moving infrastructure operations to development teams while increasing compliance and security
  • Creation of a shared view of the system and promoting collaboration
  • Automation, to facilitate continuous integration and deployment
  • Continuous monitoring, alerting and assessment
  • A platform that can allow you to gradually move your existing monolith to microservices and also natively support domain-driven design
This article was written by Sachin Dhamane and Manish Shah.

Denise Garth

Denise Garth is senior vice president, strategic marketing, responsible for leading marketing, industry relations and innovation in support of Majesco's client-centric strategy.

Blockchain: Bad Tech, Worse Vision

Even the most prominent blockchain company, Ripple, doesn’t use blockchain in its product. You read that right.

Blockchain is not only lousy technology but a bad vision for the future. Its failure to achieve adoption to date is because systems built on trust, norms and institutions inherently function better than the type of no-need-for-trusted-parties systems blockchain envisions. That's permanent: No matter how much blockchain improves, it is still headed in the wrong direction. Last December, I wrote a widely circulated article on the inapplicability of blockchain to any actual problem. People objected mostly not to the technology argument but, rather, hoped that decentralization could produce integrity.

Let's start with this: Venmo is a free service to transfer dollars, and bitcoin transfers are not free. Yet, after I wrote that article saying bitcoin had no use, someone responded that Venmo and PayPal are raking in consumers' money and people should switch to bitcoin. What a surreal contrast between blockchain's non-usefulness/non-adoption and the conviction of its believers! It's entirely evident that this person didn't become a bitcoin enthusiast because he was looking for a convenient, free way to transfer money from one person to another and discovered bitcoin. In fact, I would assert that there is no single person in existence who had a problem he wanted to solve, discovered that an available blockchain solution was the best way to solve it and therefore became a blockchain enthusiast. The number of retailers accepting cryptocurrency as a form of payment is declining, and its biggest corporate boosters, like IBM, NASDAQ, Fidelity, Swift and Walmart, have gone long on press but short on actual rollout. Even the most prominent blockchain company, Ripple, doesn't use blockchain in its product. You read that right: The company Ripple decided the best way to move money across international borders was to not use Ripples.

A blockchain is a literal technology, not a metaphor

Why all the enthusiasm for something so useless in practice? People have made a number of implausible claims about the future of blockchain—like that you should use it for AI in place of the type of behavior-tracking that Google and Facebook do, for example. This is based on a misunderstanding of what a blockchain is. A blockchain isn't an ethereal thing out there in the universe that you can "put" things into; it's a specific data structure: a linear transaction log, typically replicated by computers whose owners (called miners) are rewarded for logging new transactions.

There are two things that are cool about this particular data structure. One is that a change in any block invalidates every block after it, which means that you can't tamper with historical transactions. The second is that you only get rewarded if you're working on the same chain as everyone else, so each participant has an incentive to go with the consensus. The result is a shared definitive historical record. What's more, because consensus is formed by each person acting in his own interest, adding a false transaction or working from a different history just means you're not getting paid and everyone else is. Following the rules is mathematically enforced—no government or police force need come in and tell you the transaction you've logged is false (or extort bribes or bully the participants). It's a powerful idea.
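A toy sketch in Python can make the two properties just described visible: each block stores a hash of its predecessor plus new data and a proof-of-work answer, and tampering with an early block breaks the link to everything after it. This is illustrative only; real systems add peer-to-peer networking, consensus rules and mining rewards.

```python
# Toy sketch of "blockchain-the-technology": each block holds a hash of the
# previous block, some new data and a proof-of-work answer. Illustrative only.
import hashlib, json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine_block(prev_hash, data, difficulty=4):
    nonce = 0
    while True:
        block = {"prev_hash": prev_hash, "data": data, "nonce": nonce}
        if block_hash(block).startswith("0" * difficulty):  # the "difficult math problem"
            return block
        nonce += 1

chain = [mine_block("0" * 64, "genesis")]
chain.append(mine_block(block_hash(chain[-1]), "Alice pays Bob 5"))

# Tampering with an earlier block invalidates every block after it:
chain[0]["data"] = "genesis (tampered)"
print(chain[1]["prev_hash"] == block_hash(chain[0]))  # False
```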
So, in summary, here's what blockchain-the-technology is: "Let's create a very long sequence of small files — each one containing a hash of the previous file, some new data and the answer to a difficult math problem — and divide up some money every hour among anyone willing to certify and store those files for us on their computers."

See also: How Insurance Can Exploit Blockchain

Now, here's what blockchain-the-metaphor is: "What if everyone keeps their records in a tamper-proof repository not owned by anyone?" An illustration of the difference: In 2006, Walmart launched a system to track its bananas and mangoes from field to store. In 2009, Walmart abandoned the system because of logistical problems getting everyone to enter the data, and in 2017 Walmart re-launched it (to much fanfare) on blockchain. If someone comes to you with "the mango-pickers don't like doing data entry," "I know: let's create a very long sequence of small files, each one containing a hash of the previous file" is a nonsense answer, but "What if everyone keeps their records in a tamper-proof repository not owned by anyone?" at least addresses the right question!

Blockchain-based trustworthiness falls apart in practice

People treat blockchain as a "futuristic integrity wand"—wave a blockchain at the problem, and suddenly your data will be valid. For almost anything people want to be valid, blockchain has been proposed as a solution. It's true that tampering with data stored on a blockchain is hard, but it's false that blockchain is a good way to create data that has integrity. To understand why, let's work from the practical to the theoretical. Consider a widely proposed use case for blockchain: buying an e-book with a "smart" contract. The goal of the blockchain is, you don't trust an e-book vendor, and the vendor doesn't trust you (because you're just two individuals on the internet), but, because of blockchain, you'll be able to trust the transaction. In the traditional system, once you pay, you're hoping you'll receive the book, but once the vendor has your money the vendor doesn't have any incentive to deliver. You're relying on Visa or Amazon or the government to make things fair—what a recipe for being a chump! In contrast, on a blockchain system, by executing the transaction as a record in a tamper-proof repository not owned by anyone, the transfer of money and digital product is automatic, atomic and direct, with no middleman needed to arbitrate the transaction, dictate terms and take a fat cut on the way. Isn't that better for everybody?

Hmm. Perhaps you are very skilled at writing software. When the novelist proposes the smart contract, you take an hour or two to make sure that the contract will withdraw only an amount of money equal to the agreed-upon price, and that the book — rather than some other file, or nothing at all — will actually arrive. Auditing software is hard! The most heavily scrutinized smart contract in history had a small bug that nobody noticed — that is, until someone did notice it and used it to steal $50 million. If cryptocurrency enthusiasts putting together a $150 million investment fund can't properly audit the software, how confident are you in your e-book audit? Perhaps you would rather write your own counteroffer software contract, in case this e-book author has hidden a recursion bug in his version to drain your ethereum wallet of all your life savings? It's a complicated way to buy a book!
It's not trustless; you're trusting in the software (and your ability to defend yourself in a software-driven world), instead of trusting other people. Another example: the purported advantages for a voting system in a weakly governed country. "Keep your voting records in a tamper-proof repository not owned by anyone" sounds right — yet is your Afghan villager going to download the blockchain from a broadcast node and decrypt the Merkle root from his Linux command line to independently verify that his vote has been counted? Or will he rely on the mobile app of a trusted third party — like the nonprofit or open-source consortium administering the election or providing the software? These sound like stupid examples — novelists and villagers hiring e-bodyguard hackers to protect them from malicious customers and nonprofits whose clever smart contracts might steal their money and votes? — until you realize that's actually the point. Instead of relying on trust or regulation, in the blockchain world individuals are on purpose responsible for their own security precautions. And if the software they use is malicious or buggy, they should have read the software more carefully.

The entire worldview underlying blockchain is wrong

You see it over and over again. Blockchain systems are supposed to be more trustworthy, but in fact they are the least trustworthy systems in the world. Today, in less than a decade, three successive top bitcoin exchanges have been hacked, another is accused of insider trading, the demonstration-project DAO smart contract got drained, crypto price swings are 10 times those of the world's most mismanaged currencies, and bitcoin, the "killer app" of crypto transparency, is almost certainly artificially propped up by fake transactions involving billions of literally imaginary dollars. Blockchain systems do not magically make the data in them accurate or the people entering the data trustworthy; they merely enable you to audit whether the chain has been tampered with. A person who sprayed pesticides on a mango can still enter onto a blockchain system that the mangoes were organic. A corrupt government can create a blockchain system to count the votes and just allocate an extra million addresses to cronies. An investment fund whose charter is written in software can still misallocate funds.

How, then, is trust created? In the case of buying an e-book, even if you're buying it with a smart contract, instead of auditing the software you'll rely on one of four things, each of them characteristic of the "old way": Either the author of the smart contract is someone you know of and trust, the seller of the e-book has a reputation to uphold, you or friends of yours have bought e-books from this seller in the past successfully, or you're just willing to hope that this person will deal fairly. In each case, even if the transaction is effectuated via a smart contract, in practice you're relying on trust of a counterparty or middleman, not your self-protective right to audit the software, each man an island unto himself. The contract still works, but the fact that the promise is written in auditable software rather than government-enforced English makes it less transparent, not more transparent. The same goes for vote counting.
Before blockchain can even get involved, you need to trust that voter registration is done fairly, that ballots are given only to eligible voters, that the votes are made anonymously rather than bought or intimidated, that the vote displayed by the balloting system is the same as the vote recorded and that no extra votes are given to political cronies to cast. Blockchain makes none of these problems easier and many of them harder. More important, solving them in a blockchain context requires a set of awkward workarounds that undermine the core premise. So we know the entries are valid? Let's allow only trusted nonprofits to make entries—and you're back at the good old "classic" ledger. In fact, if you look at any blockchain solution, inevitably you'll find an awkward workaround to re-create trusted parties in a trustless world.

A crypto-medieval system

Yet absent these "old way" factors—supposing you actually attempted to rely on blockchain's self-interest/self-protection to build a real system—you'd be in a real mess. Eight hundred years ago in Europe — with weak governments unable to enforce laws and trusted counterparties few, fragile and far between — theft was rampant, safe banking was a fantasy and personal security was at the point of the sword. This is what Somalia looks like now, and it is what it looks like to transact on the blockchain in the ideal scenario. Somalia on purpose. That's the vision. Nobody wants it! Even the most die-hard crypto enthusiasts prefer in practice to rely on trust rather than on their own crypto-medieval systems. 93% of bitcoins are mined by managed consortiums, yet none of the consortiums use smart contracts to manage payouts. Instead, they promise things like a "long history of stable and accurate payouts." Sounds like a trustworthy middleman!

See also: Collaborating for a Better Blockchain

Same with Silk Road, a cryptocurrency-driven online drug bazaar. The key to Silk Road wasn't the bitcoins (those were just to evade government detection); it was the reputation scores that allowed people to trust criminals. And the reputation scores weren't tracked on a tamper-proof blockchain; they were tracked by a trusted middleman! If Ripple, Silk Road, Slush Pool and the DAO all prefer "old way" systems of creating and enforcing trust, it's no wonder that the outside world has not adopted trustless systems either.

In the name of all blockchain stands for, it's time to abandon blockchain

A decentralized, tamper-proof repository sounds like a great way to audit where your mango comes from, how fresh it is and whether it has been sprayed with pesticides. But actually, laws on food labeling; nonprofit or government inspectors; an independent, trusted free press; empowered workers who trust whistleblower protections; credible grocery stores; your local nonprofit farmer's market; and so on do a far better job. People who actually care about food safety do not adopt blockchain, because trusted is better than trustless. Blockchain's technology mess exposes its metaphor mess: A software engineer pointing out that storing the data as a sequence of small hashed files won't get the mango pickers to accurately report whether they sprayed pesticides is also pointing out why peer-to-peer interaction with no regulations, norms, middlemen or trusted parties is actually a bad way to empower people. Like the farmer's market or the organic labeling standard, so many real ideas are hiding in plain sight.
Do you wish there was a type of financial institution that was secure and well-regulated in all the traditional ways but also had the integrity of being people-powered? A credit union's members elect its directors, and the transaction-processing revenue is divided up among the members. Move your money! Prefer a deflationary monetary policy? Central bankers are appointed by elected leaders. Want to make elections more secure and democratic? Help write open-source voting software, go out and register voters or volunteer as an election observer here or abroad! Wish there was a trusted e-book delivery service that charged lower transaction fees and distributed more of the earnings to the authors? You can already consider stated payout rates when you buy music or books, buy directly from the authors or start your own e-book site that's even better than what's out there!

Projects based on the elimination of trust have failed to capture customers' interest because trust is actually so damn valuable. A lawless and mistrustful world where self-interest is the only principle and paranoia is the only source of safety is not a paradise but a crypto-medieval hellhole. As a society, and as technologists and entrepreneurs in particular, we're going to have to get good at cooperating — at building trust and at being trustworthy. Instead of directing resources to the elimination of trust, we should direct our resources to the creation of trust—whether we use a long series of sequentially hashed files as our storage medium or not.

Kai Stinchcombe

Kai Stinchcombe is cofounder and CEO of True Link Financial, a financial services firm focused on the diverse needs of today’s retirees: addressing longevity, long-term care costs, cognitive aging, investment, insurance and banking.

Using IoT to Monitor Risk in Real Time

Streaming of data readings allows operational predictions up to 20 times earlier. What if risk management saw similar improvements?

Although monitoring risk is a key tenet of ISO 31000, I see little actual monitoring in most risk management systems. Periodic reviews, dashboards, heat maps and key risk indicator (KRI) reports are all review (a different ISO 31000 tenet), not monitoring. To monitor means to supervise, to continually check and critically observe, to determine the current status and to assess whether the required or expected performance levels are actually being achieved.

This is the fifth article in the series on the Top 10 Disruptive Technologies that will transform risk management in the 2020s. This week, I look at how IoT technology can be extended to deliver real-time monitoring of risk for more than just physical environmental metrics. In my 2013 book “Mastering 21st Century Enterprise Risk Management,” I suggested “horizon scanning” as a method for monitoring risk and threats. With IoT, we have the opportunity to extend this from a series of discrete observations into continuous real-time monitoring. But let’s start with the basics.

What Is IoT – Intelligent Things?

The acronym IoT, for Internet of Things, is, like most IT acronyms, not very descriptive, so the technology is increasingly being referred to as Intelligent Things, which is both more meaningful and allows for expansion beyond its original classification (more on that shortly). IoT technology is about collecting and processing continuous readings from wireless sensors embedded in operational equipment. These tiny electronic devices transmit their readings on heat, weight, counts, chemical content, flow rates, etc. to a nearby computer, referred to as the “edge,” which does some basic classification and consolidation and then uploads the data to the “cloud,” where a specialist analytic system monitors those readings for anomalies.

See also: Insurance and the Internet of Things

The benefits of IoT are already well-established in the fields of equipment maintenance and material processing (see Using Predictive Analytics in Risk Management). Deloitte found that predictive maintenance can reduce the time required to plan maintenance by 20% to 50%, increase equipment uptime and availability by 10% to 20% and reduce overall maintenance costs by 5% to 10%. Just as the advent of streaming video finally made watching movies online a reality, the streaming of data readings has produced a real paradigm shift in traditional metrics monitoring, including the ability to make operational predictions up to 20 times earlier, and with greater accuracy, than traditional threshold-based monitoring systems.

Think about it. What if we could achieve these sorts of improvements in risk management?

Monitoring Risk Management in Real Time

The real innovation from IoT comes not from the hardware but from the software architecture built to process streaming IoT data. Traditionally, data was collected, then processed and analyzed; like traditional risk management, that approach is historic and reactive. Traditional analytics used historical data to forecast what is likely to happen based on historically set targets and thresholds: e.g., when a sensor hits a critical reading, a release valve opens to prevent overload. By then, processing and energy have already been expended (lost), and the cause still needs to be rectified. IoT technology continuously streams data and processes it in real time, and streaming analytics attempt to forecast the data that is coming.
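Before continuing, here is a minimal Python sketch of the sensor-to-edge-to-cloud flow just described; it is not from the article, and the sensor name, batch size and tolerance values are illustrative assumptions. The “edge” class consolidates raw readings into summary batches, and a stand-in “cloud” function flags any batch whose readings drift outside an expected operating band.

```python
from statistics import mean

class EdgeAggregator:
    """Consolidates raw sensor readings at the 'edge' into summary batches
    before they are uploaded to the cloud analytics service."""

    def __init__(self, sensor_id, batch_size=5):
        self.sensor_id = sensor_id
        self.batch_size = batch_size
        self.buffer = []

    def ingest(self, reading):
        """Buffer a reading; return a consolidated batch when the buffer is full."""
        self.buffer.append(reading)
        if len(self.buffer) < self.batch_size:
            return None
        batch = {
            "sensor": self.sensor_id,
            "count": len(self.buffer),
            "mean": round(mean(self.buffer), 2),
            "min": min(self.buffer),
            "max": max(self.buffer),
        }
        self.buffer = []
        return batch

def cloud_anomaly_check(batch, expected_mean, tolerance):
    """Stand-in for the cloud-side analytics: flag a batch whose mean
    drifts outside the expected operating band."""
    return abs(batch["mean"] - expected_mean) > tolerance

# Example: temperature readings streamed from one (hypothetical) boiler sensor.
edge = EdgeAggregator("boiler-temp-01", batch_size=5)
for temperature in [71.2, 71.4, 70.9, 98.6, 71.1]:
    batch = edge.ingest(temperature)
    if batch and cloud_anomaly_check(batch, expected_mean=71.0, tolerance=2.0):
        print("anomaly flagged for", batch["sensor"], batch)
```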
Instead of initiating controls in reaction to what has happened, IoT streaming aims to alter inputs, or the system itself, to maintain optimum performance conditions. In an IoT system, inputs and processing are continually adjusted based on the streaming analytics’ expectations of future readings.

This technology will have a profound and transformative effect on risk management. When it migrates from measuring hardware environmental factors to software-based algorithms monitoring system processes and characteristics, we will be able to assess stresses and threats, both operational and behavioral.

See also: Predictive Analytics: Now You See It….

In the 2020s, risk management will be heavily driven by KRI metrics and, as such, will be a prime target for monitoring by streaming analytics. In addition to the obvious environmental monitoring, streaming metrics could be used to monitor in real time staff stress and behavior, mistake (error) rates, satisfaction/complaint levels, process delays, etc. All of these change over time and can be adjusted in-process to prevent issues from arising.

In addition to existing general-purpose IoT platforms, such as Microsoft Azure IoT, IBM Watson IoT or Amazon AWS IoT, the advent of “serverless apps” (this technology exists now) will bring an explosion in mobile apps, available from public app stores, that monitor every conceivable data flow and that you will be able to subscribe to and plug into your individual data needs. We can then finally ditch the old reactive PDCA chestnut for the ROI method of process improvement and risk mitigation (see PDCA is NOT Best Practice).
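As a concrete illustration of this forward-looking style of KRI monitoring, here is a hedged Python sketch of a streaming monitor. The KRI (a transaction error rate), threshold, smoothing factor and forecast horizon are all hypothetical, and the forecasting method (Holt’s double exponential smoothing) is just one simple stand-in for the streaming analytics described above.

```python
class StreamingKRIMonitor:
    """Streams a key risk indicator (KRI) and raises an early warning when the
    forecast trend, not just the current reading, is heading for the threshold."""

    def __init__(self, threshold, alpha=0.3, horizon=5):
        self.threshold = threshold  # level at which the risk becomes unacceptable
        self.alpha = alpha          # smoothing factor for level and trend
        self.horizon = horizon      # number of future readings to project
        self.level = None
        self.trend = 0.0

    def update(self, reading):
        # Holt's double exponential smoothing over the incoming stream.
        if self.level is None:
            self.level = reading
        else:
            prev_level = self.level
            self.level = self.alpha * reading + (1 - self.alpha) * (self.level + self.trend)
            self.trend = self.alpha * (self.level - prev_level) + (1 - self.alpha) * self.trend
        forecast = self.level + self.horizon * self.trend
        if reading >= self.threshold:
            return "threshold breached"
        if forecast >= self.threshold:
            return f"early warning: trend projects {forecast:.1f} against threshold {self.threshold}"
        return None

# Example: a transaction error rate (%) creeping upward over successive readings.
monitor = StreamingKRIMonitor(threshold=5.0)
for rate in [1.0, 1.2, 1.5, 2.1, 2.8, 3.6, 4.2]:
    alert = monitor.update(rate)
    if alert:
        print(f"reading={rate}: {alert}")
# Prints an early warning at reading 4.2, before the 5% threshold is actually crossed.
```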

Greg Carroll

Greg Carroll is the founder and technical director of Fast Track Australia. Carroll has 30 years’ experience addressing risk management systems in life-and-death environments like the Australian Department of Defence and the Victorian Infectious Diseases Laboratories, among others.

Digital Innovation in Life Insurance

The focus can move from life protection to life enablement, support that helps people lead a long and healthy life.

There is a view that the life insurance business needs to change, or at least to innovate, to remain relevant and to reach new customers; if incumbent players don’t create an environment that appeals to new customers, someone else will step in and do it in their place. It’s possible that big data warehouses - or mobile and digital operators - will mass market life and health insurance protection to their customers based on the data they hold about them. While we can’t rule it out, this outcome is unlikely for multiple reasons. For a start, life insurance is not a simple transaction, and access to it is selective. It requires capital, distribution channels, underwriting and claims expertise, product knowledge and actuarial know-how. While these challenges may deter new entrants, they are not insurmountable obstacles.

See also: Selling Life Insurance to Digital Consumers

Life and health insurance are closely linked to healthcare provision; our products align with fundamental biometric events, including ill health, disability, disease and death. It’s no surprise we maintain a mainly medical approach to risk selection. Technology offers new ways to manage risk that rely less on face-to-face disclosure and traditional clinical assessments, which is why we see so much interest in understanding how innovation might work. Healthcare is adopting artificial intelligence, virtual reality, machine learning, sensors and other innovative technologies - including genomics - to deliver a more patient-centric approach. Insurance can similarly transform by adopting more customer-centric solutions. In healthcare, digital channels and mobile health solutions are welcomed when they blend with traditional methods and work simply. For insurers, this could mean placing innovation in spaces where it helps customers and where it makes sense to augment existing ways of working.

See also: 2 Paths to a New Take on Digital

This means the focus is on how we engage with people and offer them services linked to ensuring their health. The focus can move from life protection to life enablement: support that helps people lead a long and healthy life. The industry needs the energy, innovative vision and technical skills of entrepreneurs. In turn, entrepreneurs need the network, customer base and data - as well as the insurance expertise - brought by insurers and reinsurers. Working together, we can create an environment where people will share their personal data, knowing we can be trusted to use it appropriately, to keep it safe and to make it a force for good. To do this means examining the emerging digital options and working out how to optimize the benefits to our customers.