
When Will the Driverless Car Arrive?

A leading figure in the field predicts that self-driving car services will be available in certain communities within the next five years.

When Chris Urmson talks about driverless cars, everyone should listen. This has been true throughout his career, but it is especially true now. Few have had better vantage points on the state of the art and the practical business and engineering challenges of building driverless cars. Urmson has been at the forefront for more than a decade: first as a leading researcher at CMU, then as longtime director of Google’s self-driving car (SDC) program and now as CEO of a driverless car dream team at Aurora Innovation.

Urmson’s recent “Perspectives on Self-Driving Cars” lecture at Carnegie Mellon was particularly interesting because he has had time to absorb the lessons from his long tenure at Google and translate those into his next moves at Aurora. He was also in a thoughtful space at his alma mater, surrounded by mentors, colleagues and students. And it is early enough in his new startup’s journey that he seemed truly in “perspective” rather than “pitch” mode. The entire presentation is worth watching. Here are six takeaways:

1. There is a lot more chaos on the road than most recognize.
Much of the carnage due to vehicle accidents is easy to measure. In 2015, in just the U.S., there were 35,092 people killed and 2.4 million injured in 6.3 million police-reported vehicle accidents. Urmson estimates, however, that the real accident rate is between two and 10 times greater.
Over more than two million test miles during Urmson's Google tenure, Google’s SDCs were involved in about 25 accidents. Most were not severe enough to warrant a regular police report (they were reported to the California DMV). The accidents mostly looked like this: “Self-driving car does something reasonable. Comes to a stop. Human crashes into it.” A fender bender results.
While we talk a lot about fatalities or police-reported accidents, Urmson said, “there is a lot of property damage and loss that can be cleaned up relatively easily” with driverless technology.
2. Human intent is the fundamental challenge for driverless cars.
The choices made by driverless cars are critically dependent on understanding and matching the expectations of human drivers. This includes both humans in operational control of the cars themselves and human drivers of other cars. For Urmson, the difficulty in doing this is “the heart of the problem” going forward.
To illustrate the “human factors” challenge, Urmson dissected three high-profile accidents. (He cautioned that, in the case of the Uber and Tesla crashes, he had no inside information and was piecing together what probably happened based on public information.)
[Image: Google Car Crashes With Bus; Santa Clara Transportation Authority]

In the only accident where Google’s SDC was partially at fault, Google’s car was partially blocking the lane of a bus behind it (due to sandbags in its own lane). The car had to decide whether to wait for the bus to pass or merge fully into the lane. The car predicted that the remaining space in the bus’s lane was too narrow and that the bus driver would have to stop. The bus driver looked at the situation and thought, “I can make it,” and didn’t stop. The car went. The bus did, too. Crunch.

Uber's Arizona Rollover

[Image: Uber Driverless Car Crashes in Tempe, AZ]

The Uber SDC was in the leftmost of three lanes. The traffic in the two lanes to its right was stopped due to congestion. The Uber car’s lane was clear, so it continued to move at a good pace. A human driver wanted to turn left across the three lanes. The turning car pulled out in front of the cars in the two stopped lanes. Its driver probably could not see across the blocked lanes into the Uber car’s lane and, given the stopped traffic, expected that anything coming down that lane would be moving slowly. The car pulled into the Uber car’s lane to make the turn, and the result was the Uber car flipped onto its side.

See also: Who Is Leading in Driverless Cars?

Tesla's Deadly Florida Crash

[Image: Tesla Car After Fatal Crash in Florida]

The driver had been using Tesla’s Autopilot for a long time, and he trusted it—despite Tesla saying, “Don’t trust it.” Tesla user manuals told drivers to keep their hands on the wheel, eyes in front, etc. The vehicle was expecting that the driver was paying attention and would act as the safety check. The driver thought that Autopilot worked well enough on its own. A big truck pulled in front of the car. Autopilot did not see it. The driver did not intervene. Fatal crash.

Tesla, to its credit, has made modifications to improve the car’s understanding of whether the driver is paying attention. To Urmson, however, the crash highlights the fundamental limitation of relying on human attentiveness as the safety mechanism against the car's inadequacies.

3. Incremental driver assistance systems will not evolve into driverless cars.

Urmson characterized “one of the big open debates” in the driverless car world as Tesla's (and other automakers’) approach vs. Google’s. The former is “let’s just keep on making incremental systems and, one day, we’ll turn around and have a self-driving car." The latter is “No, no, these are two distinct problems. We need to apply different technologies.” Urmson is still “fundamentally in the Google camp.” He believes there is a discrete step in the design space where you have to turn your back on human intervention and design the car on the assumption that no one will be available to take control. The incremental approach, he argues, will guide developers toward a selection of technologies that limits the ability to bridge over to fully driverless capabilities.

4. Don’t let the “Trolley Car Problem” make the perfect into the enemy of the great.

The “trolley car problem” is a thought experiment that asks how driverless cars should handle no-win, life-threatening scenarios—such as when the only possible choices are between killing the car’s passenger or an innocent bystander.
Some argue that driverless cars should not be allowed to make such decisions. Urmson, on the other hand, described this as an interesting philosophical problem that should not be driving the question of whether to bring the technology to market. To let it do so would be “to let the perfect be the enemy of the great.”

Urmson offered a two-fold pragmatic approach to this ethical dilemma. First, cars should never get into such situations. “If you got there, you’ve screwed up.” Driverless cars should be conservative, safety-first drivers that can anticipate and avoid such situations. “If you’re paying attention, they don’t just surprise and pop out at you,” he said. Second, if the eventuality arose, a car’s response should be predetermined and explicit. Tell consumers what to expect and let them make the choice. For example, tell consumers that the car will prefer the safety of pedestrians and will put passengers at risk to protect pedestrians. Such an explicit choice is better than what occurs with human drivers, Urmson argues, who react instinctually because there is not enough time to make any judgment at all.

5. The “mad rush” is justified.

Urmson reminisced about the early days, when he would talk to automakers and tier 1 suppliers about the Google program and “literally got laughed at.” A lot has changed in the last five years, and many of those skeptics have since invested billions in competing approaches. Urmson points to the interaction among automation, environmental standards, electric vehicles and ride sharing as the driving force behind the rush toward driverless. (Read more about this virtuous cycle.) Is it justified? He thinks so, and points to one simple equation to support his position:
3 trillion VMT (vehicle miles traveled) * $0.10 per mile = $300B per year
In 2016, vehicles in the U.S. traveled about 3.2 trillion miles. If you could bring technology to bear to reduce the cost or increase the quality of those miles and charge 10 cents per mile, that would add up to roughly $300 billion in annual revenue—just in the U.S. This equation, he points out, is driving the market infatuation with Transportation as a Service (TaaS) business models. The leading contenders in the emerging space, Uber, Lyft and Didi, have a combined market valuation of about $110 billion—roughly equal to the market value of GM, Ford and Chrysler. Urmson predicts that one of these clusters will see its market value double in the next four years. The race is to see who reaps this increased value.

See also: 10 Questions That Reveal AI’s Limits

6. Deployment will happen “relatively quickly.”

To the inevitable question of “when,” Urmson is very optimistic. He predicts that self-driving car services will be available in certain communities within the next five years.
You won’t get them everywhere. You certainly are not going to get them in incredibly challenging weather or incredibly challenging cultural regions. But, you’ll see neighborhoods and communities where you’ll be able to call a car, get in it, and it will take you where you want to go.
(Based on recent Waymo announcements, Phoenix seems a likely candidate.) Then, over the next 20 years, Urmson believes we’ll see a large portion of the transportation infrastructure move over to automation.

Urmson concluded his presentation by calling it an exciting time for roboticists. “It’s a pretty damn good time to be alive. We’re seeing fundamental transformations to the structure of labor and the structure of transportation. To be a part of that and have a chance to be involved in it is exciting.”

Can You Leapfrog the Competition?

Once your organization jumps the gap, you’ll put distance between your organization and those that didn’t act on their knowledge.

In business, the gap between “knowing what to do” and “doing it” is of increasing concern. Why? Because in a world of rapid change, the gap between leaders and fast followers or laggards will at some point become insurmountable. The forces of change are shifting the status quo. New competitors are rising within and outside the insurance industry.

Last month, Majesco published a research report, Strategic Priorities 2017 — Knowing vs. Doing, that highlighted how insurers are responding to changes in the marketplace. We followed that up with two blogs explaining the Knowing — Planning — Doing Gap and how Habits Stifle Strategy. Today, we are focusing on what’s really important…catching up or even leapfrogging! How do we close the gap between where we are and where we need to be to stay competitive?

Recognize the gap. Seize the opportunity.

Insurers are, at their core, risk-averse. With today’s pace of change, however, the path of least risk will include taking some risks. The risk to invest in new business models, new products, new markets and new channels can, at minimum, keep insurers competitive. Even better, taking these risks could allow insurers to leapfrog the competition. Because the new competition does not play by the traditional rules of the past, insurers need to be a part of rewriting the rules for the future. There is less risk in a game where you write the rules.

Are we acting upon our knowledge of the insurance industry, regulatory requirements and market trends to create a game that plays to our strengths in meeting changing customer and market needs? Or are we simply educated observers waiting to see if it works and then following?

See also: A Manufacturing Risk: the Talent Gap

“Fail fast” is more than a technology, product, service or business model development mantra — it’s a directive to do ANYTHING that will place the organization out on the edge of change. A position of knowledgeable risk — risk with an opportunity for differentiation and growth — is the new normal for insurers. Ask your organization…is it riskier to jump into the gap with uncertainty about the potential of new ideas, or to sit still and accept the certainty of dramatic changes to the insurance industry as we know it?

Bridge the gap in logical phases.

To move from thinking to doing requires a new business paradigm in how we define and think about insurance in the digital age. Most organizations can’t simply flip off one switch (traditional business model and products administered on traditional systems) and flip another on (new business model and products on modern, flexible systems that will handle digital integration and better data acquisition and analysis). The shift is separating the insurance business models of the past 50-plus years, based on the business assumptions, products, processes and channels of the Silent and Baby Boomer generations, from those of the next generations: the Millennials and Gen Z, as well as many in Gen X. So the shift will require steps that provide a bridge across a growing pre- and post-digital-age gap between leaders, fast followers and laggards.

A paradigm shift in phases makes sense, so that business streams overlap each other. For example, we expect to see existing insurers and reinsurers increasingly looking for paths to create the business of the future and revenue growth by capturing the next generation of customers with new engagement models, products and services.
But while doing that, they must fund the future by transforming and optimizing today’s business and the current customers that they have grown over the past decade. As they rethink their business models and realign them with the customer needs and expectations of those who will be their customers for the next 10 to 20 years, they will logically still be catering to their loyal customers from the past 50-plus years. This will require insurers to know, plan and execute across these three paths:
  1. Keep and grow the existing business, while transforming and building the new business.
  2. Optimize the existing business while building the new business.
  3. Develop a new business model for a new generation of buyers.
These three focal points are critical steps in a world of change and disruption. A new generation of insurance buyers with new needs and expectations creates both a challenge and an opportunity. Those who recognize and rapidly respond to this shift will thrive in an increasingly competitive industry and become the new leaders of a re-imagined insurance business.

Act. Right now. Close the gap.

Not every insurance company will be successful in this new world of new customer risks and expectations, ever-advancing technology, data and analytics capabilities and expanding and blurring market boundaries. But if you are determined that your company will succeed, you must act now to start closing the most critical gaps between what your company knows and what it is doing in response to that knowledge.

See also: A Gap That Could Lead to Irrelevance

Insurance companies must stop talking and start doing. We are entering a new age of insurance, underpinned by a seismic shift creating leaps in innovation and disruption and challenging the traditional business assumptions, operations, processes and products of the last 30 to 50 years. The challengers, such as Lemonade, Splice, TROV, Haven Life, Root, Next Insurance and others, are bucking the status quo and introducing new business models, products, processes, channels and experiences for the future. Will they all succeed? Maybe not. But they will alter the landscape, just as others have in the past or in other industries, leaving companies that did not change in their wake.

The implications for insurers are enormous. The gap between knowing and doing is putting insurers at significant risk. It is allowing them to fall further behind, making them irrelevant … and potentially extinct. On the flip side, once your organization jumps the gap, you’ll be in the enviable position of putting distance between your organization and those that didn’t act on their knowledge. Following a road map (similar to the one outlined in the three steps above) will bring your organization to a place where it will be prepared to capture growth and gain the agility to move in new markets. Closing the gap is a journey that begins with a first step of action. Take that step now!

Denise Garth

Denise Garth is senior vice president, strategic marketing, responsible for leading marketing, industry relations and innovation in support of Majesco's client-centric strategy.

Can Apps Manage Mental Health?

Smartphone apps are well-suited for tasks such as detecting depression by watching for a fall in exercise, movement and social interaction.

Improved awareness and recognition of mental health problems and their complexity puts pressure on health systems to increase care. In turn, this stimulates exploration of the potential value of software applications (apps) run on mobile devices. The ubiquity of smartphones makes them an ideal tool for apps that can help individuals manage mental health.

Apps create long-term patient health data in a way episodic clinic consultations cannot and generate a personal health record fundamentally different from a clinical patient record. Doctors have much to gain from the gaps in information being filled by continuous monitoring in this way. Insurers also can benefit from the potential of this technology, especially for claims.

Health apps used on mobile devices can monitor physiological cues associated with sleep disturbance, anxiety, depression, phobias and psychosis. For example, depression is associated with a fall in activity levels – less exercise, less movement and fewer social interactions. Sensors in smartphones can help spot patterns of altered behavior that may represent the early warning signs of lowered mood.

See also: New Approach to Mental Health

Some apps help diagnose problems. Others help people track and manage mood using self-assessment techniques augmented by coaching functionality. Online environments are a gateway to support from more specialist clinical resources. The resources allow patients more control of their mental health management while enabling clinicians to monitor and support them remotely. (Read my blog, “The Growing Impact of Wearables on Digital Health and Insurance.”)

Apps can also help with treatment by sending reminders about medication or appointments, regardless of the person’s location. And they can provide distraction from cravings or link with social networks at times of stress. This “nudging” is effective at altering behavior; for example, integrating text messaging into smoking cessation programs improved six-month cessation rates by 71% compared with regular treatment.

However, work remains to be done before apps can integrate with insurers' processes. The confidentiality and use of personal data generated and stored by apps is complicated and needs clarification. The accuracy and sufficiency of information is a potential concern, and hardware constraints may limit potential. More evaluation of the impact of digital technology is needed in research and clinical practice.

See also: Not Your Mama’s Recipe for Healthcare

Meanwhile, insurers could engage with emerging providers of software solutions. Services like these will, over a relatively short time, become highly influential in the lives of people living with mental health problems. Pilot schemes that compare current insurance methods while evaluating new ones would take us one big step forward.
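To make the activity-monitoring idea concrete, here is a minimal sketch, not drawn from any particular app, of how a phone might flag a sustained drop in daily step counts against the user's own baseline. The window lengths and the 50% threshold are illustrative assumptions, not clinical guidance.

```python
from statistics import mean

def flag_low_activity(daily_steps, baseline_days=28, recent_days=7, drop_ratio=0.5):
    """Return True if the recent average step count has fallen well below the
    user's own baseline -- a crude proxy for reduced activity.

    daily_steps: list of daily step counts, oldest first.
    The 0.5 drop_ratio and the window lengths are illustrative assumptions.
    """
    if len(daily_steps) < baseline_days + recent_days:
        return False  # not enough history to establish a baseline
    baseline = mean(daily_steps[-(baseline_days + recent_days):-recent_days])
    recent = mean(daily_steps[-recent_days:])
    return baseline > 0 and recent < drop_ratio * baseline

# Example: a month of ~8,000 steps/day followed by a week of ~2,500 steps/day
history = [8000] * 28 + [2500] * 7
print(flag_low_activity(history))  # True -> worth a gentle check-in prompt
```

A real app would of course combine many such signals, but the principle is the same: the phone compares a person against their own history rather than a population norm.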

Opportunities in the Sharing Economy

The sharing economy is exposing situations in which new liabilities need coverage. Many are not covered by standard insurance policies.

Companies such as Uber, WeGoLook and Airbnb are taking the sharing economy to the next level by allowing people to earn money from assets they already own and the spare time that they have. In fact, Airbnb, the short-term property rental company, saw sales of $900 million in 2015. Airbnb was also valued at an impressive $24 billion in the same year. For a company that owns zero properties, that's impressive! All of these changes are bringing a lot of questions for insurance, primarily because of the short-term nature of the sharing economy.

See also: What to Learn From Sharing Economy

The Sharing Economy and Insurance

The sharing economy is clearly fulfilling a number of consumer demands. That said, it is also exposing situations in which new liabilities need coverage. Many of these new risks are not covered by standard insurance policies. For example, people who rent out their homes for Airbnb purposes generally do not have the associated risks covered under a typical homeowner’s policy. Airbnb does offer insurance coverage of up to $1 million. While this is a good start, the coverage may not be enough for very expensive homes, for situations such as a fire that starts in an urban area and spreads to other properties, or for cases where a medical emergency or even a death occurs. There are also other issues with Airbnb insurance, including the fact that it does not provide coverage if a guest shows up early or stays late. This can potentially be disastrous.

People who use other assets, such as their cars, for business purposes may also find that their standard policies do not provide coverage. Ride-sharing companies such as Uber do provide some coverage for their drivers. However, this coverage may not be enough in many cases for drivers to have all of their claims fully covered.

Opportunity for Insurers

Because the sharing economy is growing rapidly and consumers are embracing it, there is a major opportunity for insurance companies. The millions of people who share assets in the sharing economy need adequate insurance solutions to help them cover their risks. Insurance companies can capitalize on this need by creating insurance products for entrepreneurs who provide on-the-go and on-demand services in the sharing economy. And many already have!

One insurance company that is moving full steam ahead in the sharing economy insurance market is Slice. Slice provides “pay per use” insurance policies that cater to sharing economy workers. In 2016, Slice received $3.9 million in funding for this innovative business model. Companies such as Lemonade are taking it one step further and even offering insurance as a shared asset through a peer-to-peer model. Lemonade has its users pay a flat service fee, and payouts for claims come from premiums paid by networks of friends. This removes Lemonade's incentive to reduce the amount of payouts made. It is a revolutionary approach to insurance!

See also: Sharing Economy: The Concept of Trust

Final Thoughts

Considering that the sharing economy is expected to grow to $355 billion by 2025, it is safe to say that the sharing trend is here to stay. This means that it is wise for insurers to get on board and start accommodating it sooner rather than later. Slice and Lemonade are two examples of companies that are already attempting to gain strong market share in the sharing economy.
However, even though these companies are gaining traction, there is still likely to be substantial opportunity for any insurer that can help to provide insurance for the sharing economy. It appears that the time has come for the insurance industry to adapt and change to accommodate the consumer demand for sharing economy-centric policies.

Robin Roberson

Robin Roberson is the managing director of North America for Claim Central, a pioneer in claims fulfillment technology with an open two-sided ecosystem. As previous CEO and co-founder of WeGoLook, she grew the business to over 45,000 global independent contractors.

Aggressive Regulation on Data Breaches

The FTC appears to be taking preemptive measures against a company making IoT devices, not waiting for a cyberattack to occur first.

Below is an excerpt from John Farley’s new book, "Online and Under Attack: What Every Business Needs To Do Now To Manage Cyber Risk and Win Its Cyber War."

The Internet of Things

Every one of us lives in a brave new connected world. For most of us, our first foray into the online world occurred at work, as businesses discovered the internet provided a means to efficiencies that made them more competitive. The convenience of the internet has spilled over in dramatic fashion into our personal lives. The average home contains 13 internet-connected devices, and that number is growing fast. It has given birth to the term we know today as the Internet of Things (IoT).

According to the FTC’s 2015 staff report, “Internet of Things: Privacy and Security in an Interconnected World,” the number of internet-connected devices surpassed the number of people living on the earth several years ago. As of 2015, there were an estimated 25 billion internet-connected devices. The FTC estimates that this number will double to 50 billion by 2020.

Consumers love the convenience that these products bring, and manufacturers recognize this. There has been a tremendous rush to the market, as everything from security cameras, DVRs, routers, TVs, cars and thermostats to children’s toys is being designed to connect to the internet. The list grows daily.

Unfortunately, recent history has shown that, as manufacturers hurry to capture their share of the market for these devices, many have ignored the concept of security at the design stage. Instead, the focus has been to get products manufactured quickly and economically. Extra steps in the product design stage, such as addressing security, would likely increase design time, make products more difficult for the consumer to set up and ultimately increase cost. As a result, many products in our homes lack basic cybersecurity controls and are subject to online threats, as demonstrated earlier in this book by the Dyn (Dynamic Network Services) attack in October 2016. Many products come with easily guessed passwords or none at all. When security flaws are recognized by manufacturers, they are often not easily patchable.

See also: Firms Ally to Respond to Data Breaches

The FTC has taken notice and made its concerns heard in January 2017 by filing a lawsuit against Taiwan-based D-Link and its U.S. subsidiary, D-Link Systems. In the complaint, the FTC alleges the company made deceptive claims about the security of its products and engaged in unfair practices that put U.S. consumers’ privacy at risk. D-Link sells networking equipment that integrates consumers’ home networks, such as routers, internet protocol (IP) cameras, baby monitors and home security cameras. These devices allow consumers to do things like monitor their homes and children in real time. Consumers simply access the live feeds from their home cameras using their mobile devices or any computer.

The crux of the lawsuit is the allegation that D-Link failed to protect consumers from “widely known and reasonably foreseeable risks of unauthorized access.” Specifically, the FTC alleges D-Link failed to do the following:
  • Take reasonable software testing and remediation measures to protect its routers and IP cameras against well-known and easily preventable software security flaws that would potentially allow remote attackers to gain control of consumers’ devices.
  • Take reasonable steps to maintain the confidentiality of the “signature” key that D-Link used, which resulted in the exposure of the private key on a public website for approximately six months.
  • Use free software, available since at least 2008, to secure users’ mobile app login credentials; instead, D-Link stored those credentials in clear, readable text on users’ mobile devices.
The case is especially noteworthy because it does not allege a known breach of security in D-Link devices. Instead, the FTC appears to be taking preemptive measures against the company, not waiting for a successful cyberattack to occur before acting. For guidance on what regulators expect, we may refer back to the FTC's 2015 staff report, “Internet of Things: Privacy and Security in an Interconnected World.” In that report, the FTC makes the following recommendations:
  • Build security measures into devices from the outset and at every stage of development—don’t wait to implement retroactive security measures after the devices have already been produced and sold.
  • Consistently maintain up-to-date software to secure consumer personal information, and ensure regular software testing. Any identified vulnerabilities should be remediated promptly; connected devices should be monitored throughout their life cycles; and security patches should be issued to cover known risks.
  • Take steps to implement reasonable access-control measures for IoT devices, including making sure proprietary device signatures remain confidential.
  • Accurately describe the products’ safety and security features in marketing and promotional materials.
See also: Data Breach Law Could Hurt Consumer

John Farley

John Farley is a vice president and cyber risk consulting practice leader for HUB International's risk services division. HUB International is a North American insurance brokerage that provides an array of property and casualty, life and health, employee benefits, reinsurance, investment and risk management products and services.

The 6 Principles of Persuasion

Why do you buy a product or pay for a service? What motivates your customers to say “yes” to what you are offering?

Why do you buy a product or pay for a service? What motivates your customers to say “yes” to what you are offering? Have you ever thought about it, really? The list in your mind is probably endless, but do you think it has anything to do with persuasion? Yes, persuasion.

For a number of years, many companies have persuaded us (the public) to buy their products or try their services using some very catchy ads, like: Procter & Gamble's “Thank You, Mom” campaign; the ever-so-catchy “Every Kiss Begins with Kay,” which has helped the jeweler sell loads of diamonds; and my local favorite, Digicel, “The Bigger, Better Network.”

A lot of companies understand the science behind what makes you say “yes,” and you can thank Dr. Robert Cialdini for it. In his book, “Influence: The Psychology of Persuasion,” Dr. Cialdini showed that people do what they observe other people doing. It’s a principle based on the idea of safety in numbers. For example, when I am feeling for a good doubles (a sandwich sold on the street that those of you not from Trinidad and Tobago are missing out on), I will automatically gravitate to the doubles man who has a lot of people around him. I will be very cautious of someone selling doubles who has just a few people buying. That is the science of social proof. If a group of people is looking to the back of the elevator, an individual who enters the elevator will copy them and do the same, even if it looks funny.

Companies use this all the time. Anyone shopping on Amazon can read tons of customer feedback on any product. Some companies show their Facebook likes and Twitter followers. Whether we admit it or not, most of us are impressed when someone has a ton of subscribers, Twitter followers, YouTube views, blog reviews, etc.

Cialdini's six principles of persuasion (which are very similar to mine, even though I didn't know who he was until a month ago) are:
  1. Reciprocity
  2. Commitment and consistency
  3. Social proof
  4. Likability
  5. Authority
  6. Scarcity
If you are wondering whether these principles are still relevant after almost 30 years: yes, they are. As a matter of fact, these principles are the foundation for many marketing campaigns, and many companies use them to get you to buy their product or service. Most people can’t explain why they made a particular decision. But Dr. Cialdini can. After countless experiments and research, Dr. Cialdini identified those six underlying factors that influence decisions and explained how to use the factors to get more positive responses. Let's look at the factors and their applications individually in a business context:

Reciprocity

According to Dr. Cialdini, reciprocation explains why free samples can be so effective. People feel indebted to those who do something for them or give them a gift. People who receive an unexpected gift are more likely to listen to a product’s features, donate to a cause or tip a waiter more. Give something — information, samples, a positive experience, etc. — and people will want to give you something in return. A lot of companies have adopted this principle of reciprocity. Netflix, Amazon and HubSpot all offer a free service for a stipulated period of time. And some bloggers offer free downloads, free webinars and free ebooks. These companies and individuals understand that human beings are wired to return favors and, as a result, site visitors will be more likely to feel obligated to buy something from the company or individual's website.

Commitment and Consistency

People take a lot of pride in being true to their word. Dr. Cialdini suggests that oral and written commitments are powerful persuasive techniques and that people tend to honor agreements — even after the original incentive or motivation is no longer present. Cialdini indicated that people want to be consistent and true to their word. Getting customers or co-workers to publicly commit to something makes them more likely to follow through with an action or a purchase. Getting people to answer “yes” makes them more powerfully committed to an action. Conversion Voodoo helped a mortgage company increase its completed application conversion rate by more than 11% with the simple addition of a commitment checkbox. That simple act of commitment propels the mortgage company's customers toward making a larger commitment.

Social Proof

We dealt with social proof above. People will normally follow the crowd (safety in numbers).

Likability

Dr. Cialdini explained that likability is based on sharing something similar with people you like. People will naturally associate with people who are like them, and this applies to businesses as well. Customers tend to buy from companies they like. Everyone has a favorite brand that appeals to them — the more similarities there are between the customer and the brand, the more positive that relationship will be over time. A lot of companies conduct extensive research to segment their market, target their niche and position the company to appeal to its target market. These companies design their products, services, logos, websites, outlets, etc. to mirror their customers. We are influenced by a product or service we like.

See also: How Customers Really Think About Insurance

Likability may also come in the form of trust. Being fair, open, genuine and honest in your actions and having a genuine interest in people and their welfare will begin to build trust with your staff, which is one of the branches of likability and respect.
Authority

Are you more likely to take instruction from someone you perceive to be an authority? According to Dr. Cialdini, job titles such as “doctor” can infuse an air of authority and, as a result, can lead the average person to accept what a person is saying without question. Take LinkedIn influencers, for example: Their posts attract thousands of views and comments simply because people consider the influencers to be authorities in their field because of their success. According to Dr. Cialdini, “When people are uncertain, they don’t look inside themselves for answers — all they see is ambiguity and their own lack of confidence. Instead, they look outside for sources of information that can reduce their uncertainty. The first thing they look to is an authority. We’re not talking about being in authority but about being an authority.” Nike is one of the most coveted brands in the world, and one of its major strengths is its association with very successful athletes who are considered authorities in their sport. So, quite naturally, with that association, Nike has become an authoritative brand in the world of sports apparel.

Scarcity

In economic terms, “scarcity” relates to supply and demand. The less there is of something, the more valuable it is. According to Dr. Cialdini, the more rare and uncommon something is, the more people want it. For example, a lot of companies use phrases like “Don’t miss this chance,” or, “Book your spot early; limited seating available.” Many companies may manufacture a limited amount of a product in an attempt to generate a sense of scarcity among the general public. Have you ever noticed the long lines for a new product? People camping outside a store? If you create that environment of scarcity, you will create demand for your product or service.

See also: How to Exceed Customer Expectations

The six principles I've mentioned are very powerful simply because they bypass our rational mind and appeal to our subconscious instincts. A good seller will always refer to the positive opinions of other users and how successful the product is. Or the seller will give customers a free trial, etc. But it is important to note that, if you are unethical and are trying to con your customers, people will see right through your scam. These principles will only be effective if you are genuine in your efforts and you deliver on your promise to your customers.

To gain more insight into the use of persuasion, you can secure a copy of my Kindle ebook, “The 6 Principles of Persuasion Everyone in Business Should Know, Release the Trigger of Compliance in Your Staff and Customers.” You will learn influential strategies that many successful companies use to increase their sales, attract more customers, manage their employees more effectively and communicate to influence others.

When Big Data Can Define Pricing (Part 2)

Algorithms have been developed and are moving from a proof-of-concept phase in academia to implementations in insurance firms.

This is the second part of a two-part series. The first part can be found here.

Abstract

In the second part of this article, we extend the discourse to a notional micro-economy and examine the impact of diversification and insurance big data components on the potential for developing strategies for sustainable and economical insurance policy underwriting. We review concepts of parallel and distributed algorithmic computing for big data clustering, mapping and resource-reducing algorithms.

1.0 Theoretical Expansion to a Single-Firm Micro-Economy Case

We expand the discourse from part one to a simple theoretical micro-economy and examine whether the same principles derived for the aggregate umbrella insurance product still hold on the larger scale of an insurance firm. In a notional economy with {1…to…N} insurance risks r1,N and policyholders, respectively, we have only one insurance firm, which, at time T, does not have an information data set θT about dependencies among per-risk losses. Each premium is estimated by the traditional standard deviation principle in (1.1). For the same time period T, the insurance firm collects a total premium πT[total] equal to the linear sum of all {1…to…N} policy premiums πT[rN] in the notional economy. There is full additivity in portfolio premiums, and because of the unavailability of data on inter-risk dependencies for modeling, the insurance firm cannot take advantage of competitive premium cost savings due to market share scale and geographical distribution and diversification of the risks in its book of business. For coherence, we assume that all insurance risks and policies belong to the same line of business and cover the same insured natural peril - flood - so that the only insurance risk diversification possible is due to insurance risk independence derived from geo-spatial distances. A full premium additivity equation similar to that for an aggregate umbrella product premium (3.0), extended to the case of the total premium of the insurance firm in our micro-economy, is composed in (9.0).

In the next time period, T+1, the insurance firm acquires a data set θT+1, which allows it to model geo-spatial dependencies among risks and to identify fully dependent, partially dependent and fully independent risks. The dependence structure is expressed and summarized in an [NxN] correlation matrix - ρi,N. Traditionally, full independence between any two risks is modeled with a zero correlation factor, and partial dependence is modeled by a correlation factor less than one. With this new information, we can extend the insurance product expression (7.0) to the total accumulated premium πT+1[total] of the insurance firm at time T+1.

The impacts of full independence and partial dependence, which are inevitably present in a full insurance book of business, guarantee that the sub-additivity principle for premium accumulation comes into effect. In our case study, sub-additivity has two related expressions. Between the two time periods, the acquisition of the dependence data set θT+1, which is used for modeling and definition of the correlation structure ρi,N, provides that a temporal sub-additivity, or inequality, between the total premiums of the insurance firm can be justified in (10.1).
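Because equations (9.0), (10.0) and (10.1) from the original paper are not reproduced here, the following minimal sketch may help make the accumulation logic concrete. It applies a standard deviation premium principle with an assumed loading factor of 0.3 and invented per-risk statistics; it is an illustration of the sub-additivity argument, not the authors' implementation.

```python
import numpy as np

def risk_premium(mean_loss, std_loss, loading=0.3):
    """Standard deviation principle: expected loss plus a loading on volatility.
    The 0.3 loading factor is an illustrative assumption."""
    return mean_loss + loading * std_loss

def accumulated_premium(mean_losses, std_losses, corr=None, loading=0.3):
    """Accumulate a portfolio premium.

    corr is None  -> no dependence data: linear sum of per-risk premiums
                     (full additivity, the firm's position at time T).
    corr provided -> portfolio volatility from the correlation matrix
                     (sub-additive accumulation, the position at time T+1).
    """
    mean_losses = np.asarray(mean_losses, dtype=float)
    std_losses = np.asarray(std_losses, dtype=float)
    if corr is None:
        return float(np.sum(risk_premium(mean_losses, std_losses, loading)))
    corr = np.asarray(corr, dtype=float)
    portfolio_std = float(np.sqrt(std_losses @ corr @ std_losses))
    return float(np.sum(mean_losses)) + loading * portfolio_std

# Ten hypothetical flood risks with identical per-risk statistics
means, stds = [1000.0] * 10, [400.0] * 10
corr_06 = np.full((10, 10), 0.6) + 0.4 * np.eye(10)   # 1.0 diagonal, 0.6 elsewhere

print(accumulated_premium(means, stds))                # time T:  11,200
print(accumulated_premium(means, stds, corr=corr_06))  # time T+1: 10,960 (smaller)
```

With a constant 0.6 correlation, the diversified total comes in below the fully additive total, which is the sub-additivity that inequality (10.1) expresses.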
It is undesirable for any insurance firm to intentionally seek to lower its total cumulative premium because of reliance on diversification. However, an implication for underwriting guidelines could be that, after the total firm premium is accumulated with a model that takes account of inter-risk dependencies, this total monetary amount can be back-allocated to individual risks and policies and thus provide a sustainable competitive edge in pricing. The business function of diversification, and taking advantage of its consequent premium cost savings, is achieved through two statistical operations: accumulating pure flood premium with a correlation structure, and then back-allocating the total firm premium down to single contributing risk granularity. A backwardation relationship for the back-allocated single-risk and single-policy premium π'T+1[rN] can be derived with a proportional ratio of standard deviations. This per-risk back-allocation ratio is constructed from the single risk's standard deviation of expected loss σT+1[rN] and the total linear sum of all per-risk standard deviations in the insurance firm’s book of business.

From the temporal sub-additivity inequality between total firm premiums in (10.1) and the back-allocation process for total premium down to single-risk premium in (11.0), it is evident that there are economies of scale and cost in insurance policy underwriting between the two time periods for any arbitrary single risk rN. These cost savings are expressed in (12.0). In our case study of a micro-economy and one notional insurance firm's portfolio of one insured peril, namely flood, these economies of premium cost are driven by geo-spatial diversification among the insured risks. We support this theoretical discourse with a numerical study.

2.0 Notional Flood Insurance Portfolio Case Study

We construct two notional business units, each containing ten risks and, respectively, ten insurance policies. The risks in both units are geo-spatially clustered in high-intensity flood zones: Jersey City in New Jersey ("Unit NJ") and Baton Rouge in Louisiana ("Unit BR"). For each business unit, we perform two numerical computations for premium accumulation under two dependence regimes. Each unit's accumulated fully dependent premium is computed by equation (9.0). Each unit's accumulated partially dependent premium, modeled with a constant correlation factor of 0.6 (60%) between any two risks, is computed by equation (10.0). The total insurance firm's premium under both full dependence and partial dependence is simply a linear sum: business unit premiums roll up to the book total.

In all of our case studies, we have focused continuously on the impact of measuring geo-spatial dependencies and their interpretation and usability in risk and premium diversification. For the actuarial task of premium accumulation across business units, we assume that the insurance firm will simply roll up unit total premiums and will not look for competitive pricing as a result of diversification across business units. This practice is justified by underwriting and pricing guidelines being managed somewhat autonomously by geo-admin business unit, and by premium and financial reporting being done in the same manner.

In our numerical case study, we prove that the theoretical inequality (10.1), which defines temporal sub-additivity of premium with and without modeled dependence, is maintained. Total business unit premium computed without modeled correlation data and under the assumption of full dependence always exceeds the unit's premium under partial dependence computed with acquired and modeled correlation factors.
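The proportional back-allocation described above can be sketched in the same toy setting; again, the numbers are hypothetical and the rule is only a reading of the text, not the authors' exact procedure. Each risk receives a share of the diversified unit total in proportion to its own standard deviation of expected loss, so the per-policy premium lands below its standalone value.

```python
import numpy as np

def back_allocate(total_premium, std_losses):
    """Back-allocate a unit's diversified total premium to single risks in
    proportion to each risk's standard deviation of expected loss:
    pi'_i = total * sigma_i / sum(sigma_j). Numbers below are hypothetical."""
    std_losses = np.asarray(std_losses, dtype=float)
    return total_premium * std_losses / std_losses.sum()

# Ten hypothetical risks; 10,960 is the partially dependent unit total from the
# previous sketch, vs. 11,200 when every policy is priced standalone.
per_risk = back_allocate(10_960.0, [400.0] * 10)
print(per_risk)   # 1,096 per policy, vs. 1,120 standalone -> the saving in (12.0)
```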
This result justifies performing back-allocation, in both business units, of the total premium computed under partial dependence, using procedure (11.0). In this way, competitive cost savings can be distributed down to the single-risk premium. In Table 4, we show the results of this back-allocation procedure for all single risks in both business units.

For each single risk, we observe that the per-risk premium inequality (12.0) is maintained by the numerical results. Partial dependence, which can be viewed as the statistical-modeling expression of imperfect insurance risk diversification, could lead to opportunities for competitive premium pricing and premium cost savings for the insured on a per-risk and per-policy basis.

3.0 Functions and Algorithms for Insurance Data Components

3.1 Definition of Insurance Big Data Components

Large insurance data components facilitate, and practically enable, the actuarial and statistical tasks of measuring dependencies, modeling loss accumulations and back-allocating total business unit premium to single-risk policies. For this study, our definition of big insurance data components covers historical and modeled data at high geospatial granularity, structured in up to one million simulation maps. For modeling a single (re)insurance product, a single map can contain a few hundred historical, modeled, physical-measure data points. For a large book of business or portfolio simulation, one map may contain millions of such data points. Time complexity is another feature of big data. Global but structured and distributed data sets are updated asynchronously and oftentimes without a schedule, depending on scientific and business requirements and computational resources. Thus, such big data components have a critical and indispensable role in defining competitive premium cost savings for the insureds, which otherwise may not be found sustainable by the policy underwriters and the insurance firm.

3.2 Intersections of Exposure, Physical and Modeled Simulated Data Sets

Fast-compute and big data platforms are designed to support various geospatial modeling and analysis tasks. A fundamental task is the projection of an insured exposure map and the computation of its intersection with multiple simulated stochastic flood intensity maps and geo-physical property maps containing coastal and river bank elevations and distances to water bodies. This particular algorithm performs spatial caching and indexing of all latitude and longitude geo-coded units and grid cells with insured risk exposure and modeled stochastic flood intensity. Geo-spatial interpolation is also employed to compute and adjust peril intensities to the distances and geo-physical elevations of the insured risks.

3.3 Reduction and Optimization Through Mapping and Parallelism

One definition of big data relevant to our own study is data sets that are too large and too complex to be processed by traditional technologies and algorithms. In principle, moving data is the most computationally expensive task in solving big geo-spatial-scale problems, such as modeling and measuring inter-risk dependencies and diversification in an insurance portfolio. The cost and expense of big geo-spatial solutions is magnified by large geo-spatial data sets typically being distributed across multiple physical computational environments as a result of their size and structure. The solution is distributed optimization, which is achieved by a sequence of algorithms.
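As a toy illustration of the exposure-to-hazard intersection task in section 3.2, the snippet below assigns each insured location the flood intensity of its nearest grid cell from a simulated intensity map. The coordinates and depths are invented, and a production system would rely on proper spatial indexing and physically based interpolation rather than a brute-force nearest-neighbor search.

```python
import numpy as np

def nearest_cell_intensity(exposure_latlon, grid_latlon, grid_intensity):
    """Assign each insured location the flood intensity of its nearest grid
    cell. A brute-force stand-in for the spatial indexing/interpolation step;
    a production system would use a spatial index (e.g., an R-tree) and
    physically based interpolation instead."""
    exposure = np.asarray(exposure_latlon, dtype=float)   # shape (n, 2)
    grid = np.asarray(grid_latlon, dtype=float)           # shape (m, 2)
    intensity = np.asarray(grid_intensity, dtype=float)   # shape (m,)
    # squared distance in lat/lon space is adequate for this toy example
    d2 = ((exposure[:, None, :] - grid[None, :, :]) ** 2).sum(axis=2)
    return intensity[d2.argmin(axis=1)]

# Hypothetical: three insured locations and four cells of a simulated flood map
locations = [(40.720, -74.050), (40.735, -74.035), (30.450, -91.150)]
grid_cells = [(40.720, -74.060), (40.740, -74.030), (30.440, -91.160), (30.460, -91.130)]
flood_depth_m = [1.2, 0.4, 2.1, 0.0]
print(nearest_cell_intensity(locations, grid_cells, flood_depth_m))   # [1.2 0.4 2.1]
```

The sequence of mapping, splitting and reduction algorithms that this distributed optimization relies on is described next.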
As a first step, a mapping and splitting algorithm divides large data sets into sub-sets and performs statistical and modeling computations on the smaller sub-sets. In our computational case study, the smaller data chunks represent insurance risks and policies in geo-physically dependent zones, such as river basins and coastal segments. The smaller data sets are processed as smaller sub-problems, in parallel, by appropriately assigned computational resources. In our model, we solve computations on smaller-scale, chunked data sets for flood intensity and then for modeling and estimating fully simulated, probabilistic insurance loss. Once the cost-effective operations on the smaller sub-sets are complete, a second algorithm collects and maps together the results of the first-stage compute for subsequent data analytics and presentation operations. For single insurance products, business units and portfolios, an ordered accumulation of risks is achieved via mapping by the strength, or lack thereof, of their dependencies. Data sets and tasks with identical characteristics can be grouped together, and the resources for their processing significantly reduced, by avoiding replication or repetition of computational tasks that we have already mapped and can now reuse. The stored post-analytics, post-processed data can also be distributed on different physical storage capacities by a secondary scheduling algorithm, which intelligently allocates chunks of modeled and post-processed data to available storage resources. This family of techniques is generally known as MapReduce.

3.4 Scheduling and Synchronization by Service Chaining

Distributed and service chaining algorithms process geo-spatial analysis tasks on data components simultaneously and automatically. For logically independent processes, such as computing intensities or losses on uncorrelated iterations of a simulation, service chaining algorithms divide and manage the tasks among separate computing resources. Dependencies and correlations among such data chunks may not exist because of large geo-spatial distances, as we saw in the modeling and pricing of our case studies. Hence, they do not have to be accounted for computationally, and performance improvements are gained. For such cases, both input data and computational tasks can be broken down into pieces and sub-tasks, respectively.

For logically inter-dependent tasks, such as accumulations of inter-dependent quantities like losses in geographic proximity, chaining algorithms automatically order the start and completion of dependent sub-tasks. In our modeled scenarios, the simulated loss distributions of risks in immediate proximity, where dependencies are expected to be strongest, are accumulated first. A second tier of accumulations, for risks with partial dependence and full independence measures, is scheduled once the first tier of accumulations of highly dependent risks is complete. Service chaining methodologies work in collaboration with auto-scaling memory algorithms, which provide or remove computational memory resources depending on the intensity of the modeling and statistical tasks.
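The split-compute-collect pattern and the tiered scheduling described above can be sketched roughly as follows. The clusters, the loss model and the use of a process pool are illustrative assumptions rather than the production architecture of any particular modeling platform.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def simulate_cluster_losses(cluster, n_sims=10_000, seed=0):
    """'Map' step: simulate aggregate losses for one geo-physically dependent
    cluster of risks (e.g., one river basin). Lognormal severities with a
    shared shock are a purely illustrative stand-in for a flood model."""
    rng = np.random.default_rng(seed + cluster["id"])
    shared = rng.normal(0.0, cluster["dependence"], size=n_sims)      # common shock
    total = np.zeros(n_sims)
    for mu, sigma in cluster["risks"]:
        total += np.exp(rng.normal(mu, sigma, size=n_sims) + shared)  # per-risk loss
    return total

def reduce_book(cluster_losses):
    """'Reduce' step: clusters far apart are treated as independent, so their
    simulated loss vectors are simply summed iteration by iteration."""
    return np.sum(cluster_losses, axis=0)

if __name__ == "__main__":
    clusters = [
        {"id": 1, "dependence": 0.6, "risks": [(7.0, 1.0)] * 10},   # "Unit NJ"-like
        {"id": 2, "dependence": 0.6, "risks": [(7.2, 0.8)] * 10},   # "Unit BR"-like
    ]
    with ProcessPoolExecutor() as pool:                 # map: one task per cluster
        per_cluster = list(pool.map(simulate_cluster_losses, clusters))
    book = reduce_book(per_cluster)                     # reduce: combine results
    print(np.mean(book), np.percentile(book, 99))       # book mean and 99th pct loss
```

The essential point is the ordering: tightly dependent risks are accumulated inside each mapped task, while the reduce step only combines results that can safely be treated as independent.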
Challenges are still significant in processing shared data structures. An insurance risk management example, which we are currently developing for our next working paper, would be pricing a complex multi-tiered product composed of many geo-spatially dependent risks and then back-allocating a risk metric, such as tail value at risk, down to single-risk granularity. On the statistical level, this back-allocation and risk management task involves a process called de-convolution, or component convolution. A computational and optimization challenge arises when highly dependent and logically connected statistical operations are performed on chunks of data distributed across different physical storage resources. Solutions are being developed for multi-threaded implementations of map-reduce algorithms, which address such computationally intensive tasks. In such procedures, the mapping is done by task definition and not directly onto the raw, static data.

Some Conclusions and Further Work

With advances in computational methodologies for natural catastrophe and insurance portfolio modeling, practitioners are producing increasingly large data sets. Simultaneously, single-product and portfolio optimization techniques that take advantage of metrics of diversification and inter-risk dependence are being used in insurance premium underwriting. Such optimization techniques significantly increase the frequency of production of insurance underwriting data and require new types of algorithms that can process multiple large, distributed and frequently updated sets. Such algorithms have been developed theoretically, and they are now moving from a proof-of-concept phase in academic environments to production implementations in the modeling and computational systems of insurance firms.

Both traditional statistical modeling methodologies, such as premium pricing, and newer advances in the definition of inter-risk variance-covariance and correlation matrices and of policy and portfolio accumulation principles require significant data management and computational resources to account for the effects of dependencies and diversification. Accounting for these effects allows the insurance firm to support cost savings in premium value for the insurance policyholders. Even with many of the reviewed advances, there are still open areas for research in statistical modeling, single-product pricing and portfolio accumulation and in their supporting optimal big insurance data structures and algorithms. Algorithmic communication and synchronization between globally distributed, structured and dependent data sets is expensive. Optimizing and reducing the computational processing cost of data analytics is a top priority for both scientists and practitioners. Optimal partitioning and clustering of data, and particularly of geospatial images, is another active area of research.

Ivelin M. Zvezdov

Ivelin Zvezdov is a financial economist by training, with experience in quantitative analysis and risk management for (re)insurance and natural catastrophe modeling, fixed income and commodities trading. Since 2013, he has led the product development effort for AIR Worldwide's next-generation modeling platform.

It's Time to Go on the Offensive

After years of cost-cutting and downsizing, companies have realized they can’t shrink their way to success.


Creating businesses is the challenge of the day for large organizations. After years of cost-cutting and downsizing, companies have realized they can’t shrink their way to success.

In a world where what’s possible is advancing at breakneck speed, social behavior, technology and the global economy are driving forces for change. Established brands have realized they can’t stay relevant, differentiate themselves or gain a competitive advantage by tweaking aging product portfolios, buying out rivals or expanding to developing nations.

Innovation is crucial now more than ever, so companies must become Janus-like — looking in two directions at once, with one face focused on the old that still accounts for the bulk of their revenue and the other seeking out the new.

Innovation brings the hope of new value and the fear of the unknown. It is often born at the fringes of an organization’s established divisions and, at times, it exists in the spaces between. The truth is that innovation is a messy business. The high levels of uncertainty associated with new ventures need adaptive organizational structures to succeed. A new venture's operating, financial and governance models are seldom the same as those of the existing businesses. In fact, most new business models are not fully defined in the beginning; they become clearer as new strategies are tried, customer needs are understood and anticipated and new applications are developed to facilitate new experiences. This uncertainty results in half-baked, superficial changes that happen at the edge because it is easiest there: They require minimal organizational effort and get the most visibility. Launching innovation labs, incubators or venture units requires little more than a few bodies on the ground in a trendy office, even if they don’t produce much tangible value after the post-launch media hype wears off.

See also: Secret Sauce for New Business Models?  

Crossing the threshold to innovate is imperative, but the transition from the current, tried-and-tested state to a new state with unfamiliar rules and values is daunting for most people. It takes clarity of vision to create momentum and inspire others. Above all, it's a balancing act between the old and new cultures, which are often placed in conflict with one another if the company takes an either/or approach to corporate entrepreneurship.

Even when a breakthrough innovation is ready to be implemented, delivery becomes impossible in this corporate environment. Most leaders find there’s a fine line between corporate entrepreneurship and insubordination.

CEOs and heads of departments ask me how to solve these problems. How do we make a real impact with consensus and harmony? I suggest a new approach is called for, one that blends these cultures to avoid extreme behavior and creates equilibrium across strategy, operations and organization. We have only to look at any successful enterprise, such as Apple, Uber or Netflix, and we'll find innovation at its core. These companies are bold about taking risks, driving change for the better and doing it at scale through human-centered design. That understanding, combined with a collaborative culture that actively seeks out solutions to challenging problems and identifies the relevant strategies, continues to expand the realm of the possible.


Shahzadi Jehangir

Shahzadi Jehangir is an innovation leader and expert in building trust and value in the digital age, creating scalable new businesses generating millions of dollars in revenue each year, with more than $10 million last year alone.

Unlocking the Value of Insurance Data From Paper Constraints


Who knew proof of insurance could be a major news story? I suppose you could frame the question as, How many more ways can Dwight Howard get fans mad at him? But I prefer the insurance angle.

The story goes like this:

The 31-year-old center for the Atlanta Hawks was pulled over a bit after 2 in the morning on April 28 for driving 95 mph in a 65 mph zone about 10 miles from his home. Police found that he was driving his Audi RS7 with a suspended registration and without proof of insurance. They let him off with a verbal warning for the speeding and the suspended registration—one of the perks of being an NBA star, I suppose—but towed his car because he couldn't prove his claim that he had insurance.

The incident might have stayed under the radar without the towing but popped up on sports sites this week and caused a stir among fans. Why was Howard out at 2 in the morning on the day of a playoff game? Was his late night the reason he played poorly later that day in a game that eliminated the Hawks from the playoffs? The car towing fed into the narrative about Howard, a supremely talented player who has never lived up to expectations, especially in the playoffs, and is now with his fourth team. Atlanta fans are outraged, while fans of his three previous teams are chuckling and saying, "Told you so."

All because Howard couldn't produce proof of insurance.

Did he have insurance? He certainly can afford it. He earns more than $23 million a year and has been making that kind of money for a long time now. But we don't know for sure, because the archaic systems we use mean most of us carry proof of insurance as little pieces of paper in our cars.

At its core, insurance is as digital as any industry there is—we basically track a whole lot of data on people, curate a mass of very precise promises and wire money—so it strikes me as odd that we turn the data into paper and PDFs and handle them manually. Why can't we leave the data in its native state and just make it available whenever and wherever the bits and bytes are needed?
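To illustrate what leaving the data in its native state could look like in practice, here is a minimal sketch of a real-time coverage check against a hypothetical registry service. The endpoint, URL and field names are invented for illustration and do not describe any actual product or API.

```python
# A minimal sketch of digital proof-of-insurance verification.
# The endpoint, fields and response format below are hypothetical.
import json
from urllib import request

VERIFY_URL = "https://example-coverage-registry.test/v1/verify"  # hypothetical

def verify_coverage(vin: str, queried_at: str) -> bool:
    """Ask an authoritative registry whether a vehicle was insured
    at a given moment, instead of inspecting a paper card."""
    payload = json.dumps({"vin": vin, "timestamp": queried_at}).encode()
    req = request.Request(
        VERIFY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        result = json.load(resp)
    return bool(result.get("coverage_in_force", False))

# A roadside check then becomes a single query against live policy data,
# e.g. verify_coverage("<vehicle VIN>", "2017-04-28T02:15:00-04:00")
```

The point of the sketch is simply that the answer comes from the live policy data itself, not from whatever scrap of paper happens to be in the glove box.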

That question is why I'm a fan of GAPro, a startup that is trying to rewire the industry to stop these unnatural acts that we perform on data and to make the industry much more efficient. If you share my belief that data should stay in its native state, then I encourage you to read the article on Luddites below, by Chet Gladkowski of GAPro, which lays out the company's argument in detail.

Speaking of rewiring the industry for efficiency...our friends at Pypestream and our friends at EY (yes, we introduced them to each other) made an important announcement this week. EY will help clients implement Pypestream's intelligent messaging, which is cutting the costs of customer service while making customers happier (how many times can you say that with a straight face?) and which is now moving into core operations at insurers, too. Pypestream is becoming the industry standard for chatbots and other aspects of intelligent messaging—as it should, in my humble opinion—and the alliance with EY will accelerate the trend. 

Cheers,

Paul Carroll,
Editor-in-Chief 


Paul Carroll

Paul Carroll is the editor-in-chief of Insurance Thought Leadership.

He is also co-author of A Brief History of a Perfect Future: Inventing the Future We Can Proudly Leave Our Kids by 2050 and Billion Dollar Lessons: What You Can Learn From the Most Inexcusable Business Failures of the Last 25 Years and the author of a best-seller on IBM, published in 1993.

Carroll spent 17 years at the Wall Street Journal as an editor and reporter; he was nominated twice for the Pulitzer Prize. He later was a finalist for a National Magazine Award.

Unconnected World, and What It Means

Not everyone wants a connected life. Some will always want to go their own way. Insurers need to be ready.

Late in the 18th century, at the dawn of the Industrial Revolution, a young workman named Ned Ludd reportedly became famous for destroying two machines. His story spawned an anti-technology movement in the early 19th century, and adherents of that movement became known as Luddites, a term used to this day to describe an individual or group opposed to technological progress. As technology has continued to advance, modern-day Luddites have materialized. In today's increasingly connected world, there are already those who go "off the grid," or at least sympathize with the idea. Since we are on the verge of connecting and automating virtually anything you can imagine, it is worth pondering the possibility that some segments will oppose this progress or rebel outright. What will this mean for society and the insurance industry?

First, it is important to explore just how connected the world is becoming. In addition to smart home devices, drones, and connected cars that are receiving so much attention, there are smart things and solutions to address every facet of life. Even items such as toothbrushes, underwear(!), and patio umbrellas have smart versions that are collecting real-time data about their usage and their surroundings. More and more of the world is being monitored, analyzed, and automated.

See also: It’s Time to Act on Connected Insurance  

Increasingly, these smart things are being powered by artificial intelligence, which allows connected things to become animated, taking action without any human intervention. This is leading some people to fear job losses, intrusions into privacy and, in the worst-case future scenarios, machine overlords à la The Terminator series. While emerging tech advances hold many potential benefits for society, it is not difficult to see why a growing number of people fear the prospect of a connected world. Some of them are modern-day Luddites with irrational fears, but others raise important concerns that society must address.

There are a number of key considerations for insurers regarding these concerns and individuals.

  • Customer segmentation: Insurers recognize that smart technologies have great potential to improve safety and security across many situations, lowering the risk for individual and business customers. New products and services, fueled by new partnerships, will increasingly be offered by insurers. At the same time, actual adoption of connected technologies is still an unknown. There may be large segments of the population that make the conscious choice not to be connected, and insurers will have to continue to accommodate their needs, too.
  • The insurtech movement: Much of the insurtech movement is based on the increasing connectivity and data generated by connected things. Approximately 30% of the 900+ insurtech startups tracked by SMA fall directly into the connected world space, in areas like connected vehicles, smart homes, connected commercial, or connected life and health solutions. Another 14% are digital data/analytics firms. Many of those provide capabilities for insurers to use the new, real-time data sources to gain insights for underwriting, loss control, claims, or other business areas. Although many of these startups are likely to fail, others will succeed and have a key role in reshaping the insurance industry.
  • A digital divide: One scenario predicted by some is a dichotomy between urban and rural environments. The urban centers are more likely to be highly connected, with vehicles, buildings, infrastructure, and medical and educational facilities all contributing to a smart city environment. Rural settings may be relatively unconnected, as the value is questioned by individuals and small businesses and the desire for independence trumps the benefits of connected technology. Insurers should factor in the potential for a growing digital divide between cities and rural environments.
  • Customer interactions: Insurers already have difficulty convincing customers to go green by converting to electronic documents and communications. For many customers, this is unlikely to change overnight, given the feeling that their electronic footprint is already greater than they would like it to be. Many new ways to communicate with policyholders and agents are being introduced, and insurers are already taking advantage of them. But not everyone will want to opt in to these types of interactions.
  • Back to nature: A back-to-nature movement is already evident among individuals looking for a simpler, healthier lifestyle. This has driven part of the transformation in the agriculture and grocery sectors.
See also: The New Paradigm of Connected Insurance  

Insurers are likely to see implications across many industry verticals as certain segments choose to be unconnected in the future.


Mark Breading

Mark Breading is a partner at Strategy Meets Action, a Resource Pro company that helps insurers develop and validate their IT strategies and plans, better understand how their investments measure up in today's highly competitive environment and gain clarity on solution options and vendor selection.