
Underwriting Lessons From the PGA

Outdated analytic techniques can hide strategic opportunities, and up-and-comers will use more sources and more relevant data.

One of the amazing things about where we are in the arc of data changing our lives is that analytic models are pervasive. They are changing our professional lives, for sure, but I was also reminded recently that models can be used in all areas of our lives. Why? Because, golf!

As I watched the professional golf Tour Championship, I thought about how analytic models recently helped me cash in on predictive golf data. For the British Open in July, the golf club where I play ran a Pick 5 pool. The club divided the field into the Top 5 players and A, B, C and D groups of players. You pick one player from each group, and the handful of people who pick the best-performing groups of five players win some credits in the pro shop.

I could have simply made my picks based on research, a gut feel for the players and a little knowledge of the game. Instead, in a surprise to nobody, I opted for a big data approach. CBS Sports created a simulation of all the golfers in the field playing the event's course 10,000 times. It used the current statistics for each player, mapped how those statistics would help or hurt the player on that specific course and then ranked the projected scores for the golfers. I made my picks based on those results.

I won the pool for the British Open using this approach. The golfers that the CBS Sports model projected as the lowest scorers in each group turned out to be the best pick among the roughly 150 entries from my club mates.

Where is the win in insurance data?

My experience has a corollary in insurance. There is money to be made (and saved) in insurance data modeling by understanding where underwriting is heading with the power of analytics. While we look at what is changing in underwriting, we'll also look at its impact on insurance profitability, examining three areas in particular:
  • Improving the pool of risk
  • Deeper analysis and new data sources that will drive product innovation
  • Artificial Intelligence (AI) and predictive analytics
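As a side note on the golf example above, the simulation approach CBS Sports used can be sketched in a few lines. This is a hypothetical reconstruction, not their actual model; the player names, per-round scoring averages and volatility figures are invented for illustration.

```python
import random

def simulate_tournament(players, n_sims=10_000, seed=42):
    """Rank players by mean simulated 72-hole score (lower is better)."""
    rng = random.Random(seed)
    totals = {name: 0.0 for name in players}
    for _ in range(n_sims):
        for name, (mean_score, volatility) in players.items():
            # Four rounds, each drawn around the player's course-adjusted mean.
            totals[name] += sum(rng.gauss(mean_score, volatility) for _ in range(4))
    averages = {name: total / n_sims for name, total in totals.items()}
    return sorted(averages.items(), key=lambda kv: kv[1])

# (mean per-round score on this course, round-to-round volatility) -- invented
group_a = {"Player 1": (69.8, 2.4), "Player 2": (70.1, 1.9), "Player 3": (70.4, 3.1)}
ranking = simulate_tournament(group_a)
print(ranking[0][0])  # the best projected pick from the group
```

Run once per group, and the top of each ranking becomes the pick, which is the portfolio-of-likely-outperformers idea rather than an attempt to name the outright winner.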
Improving the pool of risk

Let's start with the basics and define the pool. Our pool contains insureds (the breadth of the pool) and their data (the depth of the pool). It would be nice, as underwriters, to pick only pools of winners, but criteria that strict would give us pools too small to generate premiums, and underwriters would frequently "lose" because their best picks would disappoint them.

This is the first lesson from the golf simulation's success: I didn't use it to pick the winner of the tournament. I used the model to pick a portfolio of golfers who should have performed better than the others in their group. I actually didn't have the winner of the tournament in my group.

As with putting together a baseball team, picking stocks for a mutual fund or filling any occupation where the performance of a group matters, building a healthy pool of risk is a "no-brainer." Actually doing it, however, is more difficult than simply looking at a few key factors. It requires expert data analysis (some of it automated). It requires excellent visibility into the pool of risk. And it requires continual monitoring and tweaking (possibly with some assistance from AI and cognitive computing).

See also: The Next Step in Underwriting

The basic idea, in summary, is that we need complete knowledge of the full pool and better visibility into the life of the individual applicant. Underwriters are trying to create a balanced portfolio. They don't need to pick a perfect risk, but they do need to know who is positioned to outperform their peers. By figuring out how to identify those above-expectation performers, they can skew their portfolio risk lower and outperform the odds and the market.

Deeper analysis, new data sources and "smarter" pools will prepare insurers for product innovation. The second lesson from the golf simulation was this: Every piece of data that is available should be made available in the decision process.
In Majesco's recent report, Winning in a New Age of Insurance: Insurance Moneyball, we look at how outdated analytic techniques can hide strategic opportunities. The risk to insurers is that up-and-comers will evaluate and price risk with more sources of data and more relevant data. Traditional underwriting characteristics will give you "A," "B" and "C" risks (as well as those you'll reject), but they won't let you see within a peer group to find where there's value in writing business. Traditional underwriting also assumes that an applicant's factors don't change once the applicant has entered the pool. And it treats everyone in the pool equally (same premiums, same terms), with the same expected outcomes.

But what if pools were built with the ability to tap into more granular data and to adapt forecasts based on current conditions and possible trends? Like looking at a golfer's ability to play on a wet course, what if we could see how a number of new factors, both personal and global, will affect outcomes? For example, what if commercial insurers could see how small changes in investor sentiment early in a cycle drive expensive, D&O-covered class action lawsuits three years (two renewals) later?

Look at life insurance. When your company initially accepted Ron as an applicant, it placed him into the A pool. At the time, you collected only MIB data, credit data and some personal data. Since then, you've started giving small discounts to the same pool when given access to wearable data and social media data, and you have started collecting Rx reports. In running some simulations, you realize that a combination of factors from the new data sources, such as Amazon purchase data or wearable data, can give you a much better picture of possible outcomes. What if you set out to improve predictive analytics within the pool by re-analyzing the pool under newer criteria?
Perhaps you offer wearables at a discount to insureds or free health check-ups to at-risk members of the pool. It could be any kind of data, but the key is continuous pool analysis.

Preparation's bonus: Product agility and on-demand underwriting

Every bit of work that goes into analyzing new data sources has a doubly valuable incentive: preparation for next-generation product development. Once we have our data sources in place and our analytics models prepared, we can grasp the real value in each source, creating some redundancy and fluidity in the process. So, if a data source goes away, is temporarily unavailable or becomes tainted (imagine more Experian breaches), it can be removed without consequence.

This new thinking will help insurers prepare for on-demand products that will need not just on-demand underwriting but on-demand rating and pricing. In our thought leadership report, Future Trends 2017: The Shift Gains Momentum, we showed how the sharing economy is giving rise to new product needs and new business models that use real-time, on-demand data to create innovative products that don't fit under the constraints of current underwriting practices. P&C insurers, for example, are experimenting with products that can be turned on and off for different coverages, like auto insurance for shared drivers such as those for Uber or Lyft. And this is just the start of the on-demand world: insurance available when and where it is needed and priced based on location, duration and circumstances of need.

If an insurer has removed the rigidity of its data collection and added real depth to data alternatives, it will be able to approach these markets with greater ease. At Majesco, we help insurers employ data and analytic strategies that provide agility in the use of data streams. Real-time underwriting will become instant/continuous underwriting. Analytics will be used more to prevent claims than to predict them.
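The continuous pool re-analysis described above can be sketched roughly as follows. This is an illustrative toy, not Majesco's actual methodology; the risk factors, weights and insured records are all invented, and a real model would be fit to claims experience rather than hand-weighted.

```python
# Re-score an existing pool when a new data source (wearable activity)
# becomes available. Insureds lacking the new data keep their old score,
# so the pool can be re-analyzed without waiting for full coverage.

def risk_score(factors, weights):
    """Weighted sum over whichever factors this insured has data for."""
    return sum(weights[k] * v for k, v in factors.items() if k in weights)

pool = [
    {"name": "Ron", "mib_flags": 0, "credit_tier": 1, "daily_steps_k": 9.5},
    {"name": "Ana", "mib_flags": 1, "credit_tier": 2},  # no wearable data yet
]

base_weights = {"mib_flags": 3.0, "credit_tier": 1.5}
new_weights = dict(base_weights, daily_steps_k=-0.1)  # activity lowers risk

for person in pool:
    factors = {k: v for k, v in person.items() if k != "name"}
    before = risk_score(factors, base_weights)
    after = risk_score(factors, new_weights)
    print(person["name"], round(before, 2), round(after, 2))
```

The design point is the graceful degradation mentioned above: because the score only uses factors present in both the record and the weight table, a data source that disappears or becomes tainted can be dropped by removing its weight, without breaking the scoring of the rest of the pool.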
Which brings us to the role of artificial intelligence in underwriting.

See also: Data Opportunities in Underwriting

AI and predictive analytics

Simulations have been in use for decades, but, with artificial intelligence and cognitive computing, simulations and learning systems will become underwriting's greatest asset. Underwriters who have seen hundreds and thousands of applications can pick out outlying factors that have an impact on claims experience. This is good, and certainly it should continue, but perhaps a better way of picking the winners would be to run applications through simulations first. Let cognitive computing pick out the outlying factors, and allow predictive analytics to weigh applications and opportunities for protection. (For more information on how AI will affect insurance, be sure to read Majesco's Future Trends 2017: The Shift Gains Momentum.)

Machine learning will improve actuarial models, bringing even more consistency to underwriting and greater automation potential to higher and higher policy values. It will also allow for "creativity" and rapid testing of new products. Can we adapt a factor and re-run the simulation? Can we dial the importance of a factor up or down? Majesco is currently working with IBM to integrate AI/cognitive computing into the next generation of underwriting and data analysis.

Perfection is unattainable. But if we aim for the best process we can produce, we can certainly use new sources of data and new methods of analysis to improve our game and take home a higher share of the winnings. How do I know this? Well, the golf club ran a pool for the PGA Championship the month after the British Open. I didn't win that pool. Out of more than 200 entries, I came in second. Cha-ching!

John Johansen

John Johansen is a senior vice president at Majesco. He leads the company's data strategy and business intelligence consulting practice areas. Johansen consults to the insurance industry on the effective use of advanced analytics, data warehousing, business intelligence and strategic application architectures.

A Reflection on the Las Vegas Slaughter

The tragedy is senseless and irreparable, but on days like this I'm proud I chose a profession that will help restore so many shattered lives.

You just never know.

Wednesday, Jan. 16, 1991. I was on a flight to the West Coast when Desert Storm started. The pilot came on and told us about President Bush's speech. He asked us to pray for our soldiers in harm's way and for our country.

Tuesday, Sept. 11, 2001. I was at a conference in Disney World when a trickle of news reports quickly turned into the media tsunami that forever changed the trajectory of our culture. We gathered in the hotel ballroom to address questions as a group. Over the next couple of days, I had customers and friends melt in my arms, overcome with grief. We comforted one another as we struggled to make sense of the terrorist attacks, making arrangements to get people home and renting cars, vans and buses.

Friday, July 20, 2012. I was driving to a speaking engagement when I received a panicked call about the shooting in Aurora, CO, where our son and his wife live. They were safe, but he had to report to the scene immediately because some airmen in his charge were in the theater.

See also: Time to Mandate Flood Insurance?

Sunday, Sept. 10, 2017. Hurricane Irma cut a wide swath of damage and flooding through central Florida, where we live. Our normally quiet small town is still abuzz with electrical and phone crews feverishly working to restore normal operations and make permanent repairs. Many homes in our area are a patchwork of blue tarps. FEMA contractors are still removing debris as convoys of trucks and equipment rumble through neighborhoods.

Monday, Oct. 2, 2017. Today, I'm in Las Vegas, only to be awakened to the horrific news that we know all too well. I've received numerous messages over the entire spectrum of electronic communications, asking about our safety.

In these and other events, we will want to learn as much as possible. We want to know the who and struggle with the why. Much will be uncovered over the next hours and days. There are so many open questions waiting to be answered.
There is so much that we don't know. But there is one thing I know for certain, and I say this in all seriousness and respect: Insurance will play a vital role in the coming days, weeks and months, helping to rebuild lives, families and businesses devastated by this heartbreaking and senseless tragedy.

Working in insurance since 1972, I've been humbled over and over again to be part of an industry that helps people. While my career has been on the technology side of the business, there is a quiet assurance in knowing that what we do will help restore lives.

At the tender age of 19, I had my first "data processing" interview. It was for a junior terminal programmer trainee position at a large insurance company that no longer exists, paying an exorbitant $7,500 a year. After the interview, I walked to the bus stop and wondered about working for an insurance company. I replayed in my mind all the jabs and jokes that we know all too well surrounding the insurance industry. Was I somehow going to be tainted by being part of a profession whose reputation equaled that of gas station attendants (a true statistic)?

See also: Harvey: First Big Test for Insurtech

There have been opportunities to leave the insurance industry over the years. But I kept coming back to the reality that there are precious few professions that can have such a direct, positive effect on the lives of so many as insurance. Yes, we have our problems and detractors. Yes, we can sometimes be our own worst enemy when it comes to public perception. Yes, we could do a better job of communicating with and serving our customers and the public as a whole. But I count it a personal honor and privilege to serve in the insurance industry. I hope you do also.

Chet Gladkowski

Chet Gladkowski is an adviser for GoKnown.com, which delivers next-generation distributed ledger technology with E2EE and flash-trading speeds to all internet-enabled devices, including smartphones, vehicles and IoT.

Insurance CROs: Shifting to Offense

A survey finds insurers' chief risk officers engaged on high-priority strategic and business-driven issues.

EY's seventh annual survey of chief risk officers in the insurance industry confirms that companies are starting to move on from the post-crisis era of defensive risk management. While some CROs speak of works in progress or continuing improvements to their company's risk management efforts, more CROs report that they are comfortable with functioning frameworks that provide "defense" for the company. The role continues to mature and grow in sophistication. Some CROs are spending more of their time engaged on high-priority strategic and business-driven issues, such as disruption, innovation and emerging threats, including cybersecurity.

See also: The State of Risk Oversight in 2017

CROs are starting to move to offense. They see their roles less in terms of organizational compliance with enterprise risk management (ERM) policies, and they are no longer merely reacting to regulatory requirements. For almost all companies surveyed, Own Risk and Solvency Assessments (ORSA) are "job done." Even CROs at companies that faced challenges related to federal regulation or Solvency II report that such issues are largely behind them. Many of this year's discussions involved consideration of "what comes next?" As the CRO agenda evolves, significant transitions are underway (see figure 1):
  • From relative stability to disruption
  • From clear and well-understood threats to emerging and unknown risks
  • From serving as a control function to partnering with the business
  • From focusing on the risks of action to promoting innovation and avoiding the risk of inaction
See also: Key Misunderstanding on Risk Management   Where CROs mostly played defense in focusing on compliance and regulatory activities after the crisis, many have started to move on to a more active, business-driven posture, with greater emphasis on adding value through the efficient delivery of ERM. You can find the full EY report here.

20 Likely Changes in Ethics on Claims

Insurance is changing in ways that have profound implications for claims. Questions only occasionally raised before will now become common.

Insurance is changing in ways that have profound implications for claims. Some claims practices will become redundant. Questions only occasionally raised before will now become common. New skills will have to be learned. It's all very exciting, but also a little daunting. Clearly, the way we think about claims will change, but, at the same time, certain constants will remain: settling claims honestly and fairly, for example. So what are the changes that have implications for the ethics of insurance claims? I want to look at 20 changes that I think will be significant in terms of the ethical challenges facing claims people.

The "Ask It Never" Policy

As insurers turn from asking the policyholder questions about the risk to be insured and instead obtain that information through big data, the time of "no questions at all" will approach. What will happen to claims then? If no questions are asked, then non-disclosure becomes obsolete, as does the whole idea of material facts. What will be left for the claims team to review or decide upon?

The Personalized Policy

A personalized policy will, by its very nature, mean that a claim made upon it will result in an increased premium. As the public increasingly comes to sense this, how will it influence the way claimants approach their claims? Should claims people warn potential claimants that their claim will result in an increased premium? Some claimants will self-fund small, valid claims, although those spending patterns will then be picked up by insurers, which could move the premium anyway. Claims may well become more confrontational, as policyholders sold on the idea of personalization find the consequences unpalatable. What can claims people do to maintain trust in such circumstances?
See also: Most Controversial Claims Innovation

Optimizing Claims Decisions

The trend toward claims settlements being optimized according to what a claimant may be prepared to accept fundamentally changes key concepts in insurance. What would be a fair claims settlement in such circumstances? And how would "fair" be determined, and by whom? Claims optimization pushes the claims specialist to the margins, although not out of the process altogether, for optimized settlements will raise questions. Someone may be hard up, but not stupid: They will want to know the basis upon which the settlement they've been offered has been calculated, and claims people will have to do the explaining.

Correlation and Causation

Insurers are using big data to make decisions about individual claims and claimants. Yet big data analysis relies on identifying significant, correlated patterns of loss, while individual claims rely on identifying the causation of a loss. That difference is important, for correlation and causation are not the same. You can't replace a "one to one" technique like causation with a "one to many" technique like correlation. It would be akin to saying that because your claim is like all those others (which were turned down), we're going to turn down your claim, too. Hardly a recipe for fairness. So as the tools of artificial intelligence are increasingly applied to claims processes, the extent to which the decisions being made remain fair will have to be closely monitored, in terms of both inputs and outcomes. How will this be done?

Reasonable Expectations

As data streams all around us (both policyholder and insurer), our ability to understand what is happening around us increases. This raises the question of the extent to which a claimant could reasonably have been expected to be aware of something. If big data knows something, should individual policyholders be expected to know it, too?
How will insurers start to judge whether a claimant took sufficient notice of something that subsequently influenced the claim?

The Sensor Balance

As homes, offices and factories become covered in sensors telling you all sorts of things about the property that you were only vaguely aware of before, the number of decisions you'll be called upon to make will increase. There could be some maintenance required on your roof or drains, and, unless it's done soon, your insurance could be affected. Or perhaps some machinery has been running longer than usual to meet new orders, but the sensors are telling you to shut it down for servicing. That knowledge is being recorded and stored, along with the decisions you take in relation to it, all ready for your insurer to tap into should there be a claim. Insurers will now have the information to apply those traditional policy clauses on maintenance with new vigor. How will this play out?

The 3 Second Repudiation

The 3 second claims settlement made news for Lemonade, but so will a 3 second claims repudiation. After all, giving people what they want as quickly as possible is a quite different experience from giving people what they don't want as quickly as possible. How will such repudiations be managed, and how might claimants react to an almost instant dismissal of their claim?

A Smart Contract Just for You

Big data, smart contracts and personalized policies that ask no questions of the policyholder all point to a level of individualization that will baffle the typical claimant. A loss covered last time might not be covered next time. A neighbor's loss may be covered in a quite different way from yours. How do you explain such situations to a claimant whose knowledge of "insurtech" is zero? If everything is so variable, might communication turn out to be the claims person's key skill?
The Automation of Fairness

As claims processes become increasingly automated, insurers will have to take care not to lose sight of their obligations regarding the fairness of the decisions being made. Some insurers struggle with this even in today's relatively straightforward workflow processes, so how they will cope with something like artificial intelligence is a concern. Experience points to this becoming harder as systems become more complex. A lot will depend on the extent to which those in oversight roles bring challenge and critical thinking to the implementation of such projects.

Transparency

As claims processes become increasingly automated, should the claimant have the right to be told about this? There's talk of news written by artificial intelligence "bots" soon having to be flagged as "artificial news." Might the same soon apply to individual decisions on things like claims? If so, then, from a European perspective, a claimant's "right to know" might soon become a more complicated request to fulfill.

Upholding Supplier Standards

The consensus is that a typical claims function's supply chain network will continue to grow for some time. Bringing in all of these exciting new capabilities is fine, so long as everyone is singing the same tune. Insurers have to abide by the ethics of insurance claims, such as the rules on fairness, honesty and integrity. So how can a claims director convince her board of fellow directors that their firm's ethical obligations are being met every bit as confidently as in more analogue times? Has her due diligence taken account of not just the intelligence and energy of those providers of artificial intelligence solutions, but their integrity as well? It's a challenge best met early on.

Instantaneous Claims

That breed of policies described as "mobile, micro and moment" is all about instant cover for just what you want, when you want it, for as long as you want it, arranged with a few clicks on your phone.
Turn those conveniences around and you have the potential for the instantaneous claim, perhaps only moments after inception: "I bought cover for a bike, got on it, went outside and crashed it." Such claims have usually been looked upon with suspicion by claims people, on the basis that such a quick loss could not be fortuitous. Yet if you provide cover in this way, why shouldn't some claims happen in much the same way? This is a change of mindset needed throughout an organization, not just in underwriting.

Managing Complexity

As more cogs, and more complicated cogs, are added to the overall claims process, the greater the challenge becomes to deliver on the promises made at the planning stage. This is an existing problem for claims people in the UK, who have acknowledged that the multiplying layers of many claims systems aren't delivering the expected results. The answer will not come from artificial intelligence working it out for itself: All AI needs to be trained on historical data. So claims people need to understand complexity and how to manage it.

Challenging the Decision

Research by one leading insurer in the UK market found that policyholders are less likely to trust an automated decision than one involving a human. So as claims become more automated, insurers could face an increasing number of challenges from individual claimants asking how the decision on their claim was reached. How will they explain an output from an increasingly "black box" process? They may be tempted to rely on generalized responses, but that isn't going to work when the claimant appeals to an adjudication service like the UK's Financial Ombudsman Service (FOS). Organizations like FOS should be working now on how they can get inside that automation and assess the fairness of the outcomes it has been designed to produce. Will they perhaps look to accredit the overall automation, or rely on case-by-case use of techniques like fairness data mining?
Another factor insurers need to take into account is claimants turning to the EU's General Data Protection Regulation and enforcing their right to access the data upon which the decision on their claim was made. Insurers will need to prepare for this, both in terms of the volume of such requests and the complexity of responding to them. Again, the ability of claims people to communicate complex things will become a key skill.

Provenance of Data

As insurers bring more and more data into their claims processes, especially unstructured data drawn from sources like social media, they will need to be prepared to demonstrate the provenance of that data. In other words, they need to be able to answer questions like "Where did you get that piece of data that seems to have been a big influence on my claims decision?" or "That piece of data is wrong, so you need to change your decision." If you use data outside the context in which it was first disclosed, the error rate shoots up. Just because a piece of data resides within a system doesn't establish it as fact.

Significance in Algorithms

Pulling all sorts of data together is one thing, but the value claims people draw from all that data comes from the algorithms that weigh its significance. The levels at which the various measures of significance are set will be hugely important for the outcomes claimants experience. These introduce options that require judgments, and such judgments need to take overt account of ethical values like fairness and respect.

See also: How AI Will Transform Insurance Claims

Segmentation of Claimants

As claims processes become more automated, claims people are presented with the opportunity to segment the experience of the various claimant types they engage with. Many insurers currently use software to assess claimants at the "first notification of loss" stage and vary the type of experience they receive.
At the moment, this is being used to address claims fraud, but it is unlikely to end there. Artificial intelligence coupled with audio and text analysis will allow insurers to segment non-fraud claimants for all sorts of purposes. The challenge for claims people is just how acceptable some of those purposes might be. For example, what if claimants are segmented according to the amount they are prepared to accept as a claims settlement? All of these new technology platforms introduce options, but just because you have the option to do something doesn't mean it's a good thing.

Warnings Ahead

New ways of communicating with policyholders offer up the possibility of advance warnings of storms, floods and the like. That brings many benefits to both insurer and policyholder, but it also raises the prospect of those warnings having conditions attached. Rather than advice, they could include requirements linked to the continuation of certain elements of cover. If the policyholder doesn't (for whatever reason) respond to those communications, this introduces possible conflict zones for subsequent claims.

The Convenience of Clicking

The ease with which cover can be incepted using mobile devices is a great convenience to policyholders at the outset of a policy, but it could turn into a great inconvenience when making a claim. Research shows that we invariably do not read the terms and conditions presented to us when buying a mobile-based product or service: It's just too easy to click "accept," especially when the fine print looks even finer on a small screen. So claims people need to be prepared for many more people than at present not knowing about the cover they've signed up for, beyond what is indicated by a few well-designed icons on a screen.

The Language of Claims

A subtle change of language has emerged in claims circles in recent years. The service element of what's on offer is being stressed more than the insurance element.
While it's great to see insurers now paying attention to risk management in their personal lines portfolios, this shouldn't come at the cost of what is at the heart of an insurance product: risk transfer. The danger is that this slow and subtle change will not be picked up by customers until they find out, when trying to claim, that what they've bought is largely a service and not insurance.

To conclude: It's a great time to be in insurance, and I would say even more so in claims, for that is where all the promises inherent in the insurance purchase are fulfilled. Those who recognize the ethics of insurance claims and rise to the challenges outlined above will be the ones trusted in the digital market.

Duncan Minty

Duncan Minty is an independent ethics consultant with a particular interest in the insurance sector. Minty is a chartered insurance practitioner and the author of ethics courses and guidance papers for the Chartered Insurance Institute.

Lemonade's New Push: Zero Everything

With Zero Everything, Lemonade customers will no longer need to pay deductibles and have their rate increased when filing claims.


Following Lemonade’s first anniversary, I’m thrilled to announce a new product that could change the way people use their insurance. We call it Zero Everything.

With Zero Everything, Lemonade customers will no longer need to pay deductibles and have their rate increased when filing claims. That’s right: No deductibles. No increase in pricing. Nothing.

If you have ever had to file an insurance claim, you probably thought long and hard before doing so. You're in good company. Many people experience anxiety before filing a claim, and in many cases they just give up altogether. Filing claims should be a pleasant and reassuring experience. After all, claims are the reason we all get insurance in the first place.

Reading some of the negative feedback that insurance companies receive highlights just how serious this phenomenon is. Ask agents, and they will tell you that customers are reluctant to file claims, mostly for two reasons:

The claim is below the deductible. For example, your policy’s deductible is set to $500, and your $450 headphones just got snatched. Tough luck. You’re not going to get a dime out of your insurance. Because the deductible is an amount that’s deducted from the value of the claim, there’s no sense claiming anything below it. In fact, claims that are lower than the deductible will be immediately declined.

Fear of having rates increased. There’s a famous saying -- “past claims are the best predictor of future ones.” This leads insurers to increase the rates for customers who file claims. They see it as a measure to make up for future potential losses from these customers.
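The deductible mechanics behind the first of the two reasons above can be sketched as a simple payout calculation. This is a hypothetical illustration, not Lemonade's actual claims logic; the figures come from the text's headphone example:

```python
def standard_payout(claim_amount: float, deductible: float) -> float:
    """Classic policy: the deductible is subtracted from the claim.

    Claims at or below the deductible pay nothing, which is why small
    claims are often never filed in the first place.
    """
    return max(claim_amount - deductible, 0.0)

def zero_everything_payout(claim_amount: float) -> float:
    """Zero-deductible policy: the full replacement value is paid."""
    return claim_amount

# The $450 headphones against a $500 deductible from the example:
print(standard_payout(450, 500))    # 0.0 -- below the deductible, nothing paid
print(zero_everything_payout(450))  # 450
```

The asymmetry is the whole point: under the standard policy, any claim at or below the deductible is worth exactly zero to the policyholder.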

See also: Lemonade: World’s First Live Policy  

Zero Everything provides perfect peace of mind: Never worry about paying deductibles or increased policy prices again (as long as there's no abuse, of course).

.@lemonade_inc Zero Everything is the closest thing to having an UNDO button for real life! #GoLemonade

But there’s more. Regardless of the value of your claim, with Zero Everything, you’ll get the full amount needed to replace your items with new ones! Someone stole your $500 bike? We’ll pay you $500 to get a new one!

[Image: Zero Deductible, Zero Rate Hikes, Zero Worries — Here's how it works]

How it works

When signing up for a new Lemonade policy, look for the Zero Everything section under the settings tab in our quote page. If you already have a renters policy with us, just use our app to edit your Live Policy, go to the settings tab and look for the Zero Everything box. Condo policyholders, Live Policy is coming soon, so just open the app and tap on Ask Us Anything, and we’ll sort you out.

Why zero deductibles do not exist in home insurance today

In the U.S., home insurance companies spend more than $10 billion each year on the bureaucracy of claim handling alone. All of the endless paperwork, faxes and phone calls you hate? Someone has to pay for them.

In fact, small claims often cost incumbents more to process than the claims themselves are worth. So, they brand small claims "nuisance claims" and use the deductible as a deterrent, to discourage you from ever filing them.

It’s important to note that there are no bad intentions behind this mechanism; it’s just an unfortunate consequence of the way insurance works.

How AI changes everything

But that’s where AI Jim, our claims bot, changes the game. AI Jim loves small claims; they’re his favorite. He settles them on the spot, with zero hassle and at zero handling costs. That’s because there’s no such thing as a nuisance claim for AI Jim. In fact, on a slow day, AI Jim can review, approve and pay 1,000x more claims than an entire team at one of the traditional insurers.

This kind of fundamental change is made possible by the replacement of manual labor with AI and bots!

So, if deductibles and rate increases make you nervous, I suggest you head over to one of our apps or our website (lemonade.com) and get yourself Zero Everything coverage in a few seconds.

See also: Lemonade’s Crazy Market Share  

Zero Everything is rolling out in California, Texas and Illinois, where it will first be available for renters and condo policyholders. Follow us for updates on coming availability in NY and NJ and support for homeowners policies.


Shai Wininger

Profile picture for user ShaiWininger

Shai Wininger

Shai Wininger is a veteran tech entrepreneur and inventor, who most recently co-founded Lemonade, a licensed insurance company powered by artificial intelligence and behavioral economics. He previously founded Fiverr.com, the world’s largest marketplace for creative and professional services.

Top 10 Changes Driven

Since the inaugural InsureTech Connect last fall, the amount of smart capital focused on more complex industry issues has soared.

With 2017 InsureTech Connect happening this week, below is one industry insider's top 10 notable insurtech changes since the inaugural event this time last year: See also: Insurtechs: 10 Super Agents, Power Brokers  
  1. Early-stage ventures are moving beyond the online/UI experience and are focused on the core industry economics -- i.e. driving down the 56 cents of every premium dollar that is indemnity (loss costs), the further 12 cents needed to assess, value and pay those losses, and the circa 26 to 30 cents required to develop, distribute, select and price product.
  2. There is an increased presence of early-stage-focused VCs that have insurance chops, meaning that high-quality startups focused on more complex industry issues have smart capital for funding (there wasn't much of that last year at this time).
  3. An extraordinary boom in insurtech investment capital means that too many businesses with little chance for success are getting funding. (How many new millennial-focused renters insurance ventures does the market actually need?)
  4. Despite the overwhelming level of capital focused on the space, valuations are generally rational. Yet, there are far too many high-profile investments that seem to make little sense, both in terms of funding levels and valuations. (I can personally attest to being recruited for two roles running pre-revenue startups that received term sheets from investors with pre-money valuations between $30 million and $40 million...exciting for the founder, but irrational in the cold light of day.)
  5. Insurance (viewed by some/many as old school and boring) is showing signs that it can lead in the commercialization of new technologies (IoT, blockchain, telematics, etc.). This can only be positive for attracting "A" talent to our industry.
  6. Lemonade has demonstrated that all of us in the industry can learn something from them. The most recent example is the zero-deductible product (and a no-rate-change protection for as many as two claims), which received unprecedented attention. While this is not new and is already offered by some, the lesson in this case is that being a marketing machine may be worth something (or Dan Ariely, the behavioral economist working with Lemonade, should be hired by us all).
  7. The inexorable flow of new risk-taking capital (pension funds, hedge funds, SWFs, etc.) is leading to "infrastructure light" risk takers -- we now have some smart insurance entrepreneurs jumping in with solutions that enable this structural change.
  8. Well-established insurance vertical solution tech companies are now providing attractive exits for insurtech early-stage companies.
  9. The emergence of insurance-specific technologies in areas such as chatbots, machine learning and advanced analytics seems to be leading the more established, industry-agnostic solutions in terms of trials by industry incumbents -- watch this space!
  10. The industry is all in on insurtech! Witness the presence of public company CEOs' commentary on the topic, the abundance of CVCs, the number of corporate intra-ventures, etc. Also compare and contrast year-over-year presence at this conference.
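The premium-dollar economics in point 1 can be checked with quick arithmetic. This is a sketch using the article's illustrative figures, not actual carrier data:

```python
# Where each premium dollar goes, per the figures cited in point 1.
indemnity = 0.56          # loss costs
loss_adjustment = 0.12    # assessing, valuing and paying those losses
expense_range = (0.26, 0.30)  # develop, distribute, select and price product

# Implied margin left over at each end of the expense range:
for expense in expense_range:
    margin = 1.00 - (indemnity + loss_adjustment + expense)
    print(f"expense {expense:.2f} -> margin {margin:.2f}")
```

The arithmetic shows why early-stage ventures are targeting loss costs rather than the user interface: only 2 to 6 cents of every premium dollar is left once indemnity, adjustment and product costs are paid.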

Andrew Robinson

Profile picture for user AndrewRobinson

Andrew Robinson

Andrew Robinson is an insurance industry executive and thought leader. He is an executive in residence at Oak HC/FT, a premier venture growth equity fund investing in healthcare information and services and financial services technology.

5 Ways to Enhance Client Engagement

Instead of thinking about best practices in our industry, don’t we need to look at how other industries are innovating and see the implications?

As an industry, we love to focus on “best practices.” We examine the strategies and tactics of the most successful advisers and ask how we can replicate those in our own businesses.
  • The upside? Best practices focus us on what’s working today.
  • The downside? Best practices don’t focus us on how things might be changing in future.
It's fair to say that our collective goal isn't just to keep pace, but to stay ahead of the curve. And if that's the case, don't we need to focus squarely on the trends that will shape our future? Instead of thinking about best practices, don't we need to think outside the proverbial box, look at how other industries are innovating and ask how that might affect the future of our own industry? In fairness, we do a good job of this when it comes to things like technology, perhaps because we expect it to be a disruptive force. But what if other aspects of our business – like the way we engage with clients – are also being disrupted? Don't we owe it to ourselves to consider the future and ask how it will affect the core of what we do? See also: Should You Recommend Castlight to Your Clients?

The Big Five

To understand how client engagement is being disrupted, we draw on our own ongoing investor and advisor research. Just as importantly, we look outside the industry at case studies of (and research into) innovation in client engagement. We've identified five ways in which client engagement is changing, ways that will affect you, your clients and your business in the future.

1. Client satisfaction is no longer enough. I've shared research more than once that highlights the low bar you set if client satisfaction and loyalty are your only goals. The reality is that most clients are both satisfied and loyal; achieving either (or both) makes you just as good as everyone else. In the future, we'll need to find new metrics to measure success. If that's the case, how do we measure success?

2. Client engagement is the new client satisfaction. When we create a deeper and more enduring relationship with clients, we create a meaningful bond. While great service can drive satisfaction, driving deeper engagement means playing a qualitatively different role in the lives of your clients.
Engaged clients see their advisor as providing leadership in areas that extend beyond investments. In the future, we'll need to find ways to proactively demonstrate leadership in the lives of our best clients to add value. If that's the case, how do we create true value?

3. Value will not be provided, but co-created. While we often focus on the traits of effective leaders, leadership is really a two-way street. In the past, value was firm-centric – you decided what you were offering and hoped it sold. It has since morphed to become client-centric, with clients influencing what is offered. We believe that, in the future, the client experience will be actively co-created between advisor and client. Your clients will play a bigger role in defining the experience. If that's the case, how do we co-create value when the needs of clients differ?

4. Cater to the needs of everyone and you cater to the needs of no one. To fully connect with clients, we need to focus our attention on a more defined target. We cannot be all things to all people, so your client experience will need to reflect the unique needs of your ideal clients. In the future, advisors will need to build a client experience around a more narrowly defined target or offer. If that's the case, how can we operate efficiently?

5. 'Predictable' and 'consistent' are so last year. In an effort to deliver a great client experience, the industry moved toward the notion that we need to standardize processes to provide a strong and repeatable offer. While it's clear that we cannot reinvent the wheel for every client, personalization and connection are the watchwords for the future. Our challenge is to determine how we can create a personalized experience efficiently, drawing on what we know about what is important to our clients and using technology effectively. See also: The New Paradigm of Connected Insurance

If that's the case, how do we take action? Well, that's the question, isn't it?
We have some thoughts on that. I'd like to invite you to join us on May 25, if you haven't already registered. We're going to run a webinar that will put these five 'imperatives' in context and focus on what we can do to take action. In fact, helping you craft a plan for the future of client engagement is so important to us that we've formalized a new suite of programs to help you take action. It's called the Client Engagement Suite, and you'll hear more about it very soon. Thanks for stopping by, Julie

Julie Littlechild

Profile picture for user JulieLittlechild

Julie Littlechild

Julie Littlechild is a speaker, a writer and the founder of AbsoluteEngagement.com. Littlechild has worked with and studied top-producing professionals, their clients and their teams for 20 years.

A New Framework for Your Analysts

A "competency framework" can help analytics, data science or customer insight leaders in a wide variety of ways.

As we focus on Analytics and Data Science, I've been reminded of how a competency framework can help. Both work with clients and my own experience in creating and leading analytics teams have taught me that such a tool can help in a number of ways. In this post, I'll explain what I mean by a competency framework and the different ways it can help analytics, data science or customer insight leaders. I wonder if you've used such a tool in the past?

Across generalist roles and larger departments, the use of competencies has been the norm for many years, as HR professionals will attest. However, these definitions and descriptions can feel too generic to be helpful to those leading more specialist or technical teams. But, before I get into overcoming that limitation, let me be clear on definitions. A dictionary definition of competency explains it as "the ability to do something successfully or efficiently." In business practice, this usually means identifying a combination of learnt skills (or sometimes aptitude) and knowledge that evidences that someone has that ability (normally to do elements of their job successfully). HR leaders have valued the ability for these to be separated from experience in a particular role, thus enabling transferable competencies to be identified (i.e., spotting an individual who could succeed at a quite different role).

Defining a competency framework

Building on this idea of competencies as building blocks of the abilities needed to succeed in a role comes the term 'competency framework.' The often-useful MindTools site defines one as: “A competency framework defines the knowledge, skills, and attributes needed for people within an organisation. Each individual role will have its own set of competencies needed to perform the job effectively.
To develop this framework, you need to have an in-depth understanding of the roles within your business.”

Given that many analytics leaders have come up through the ranks of analyst roles, or are still designing and growing their functions, most have such an in-depth understanding of the roles within their teams. Perhaps because HR departments are keen to benefit from the efficiencies of standardised competencies across a large business, there appears to have been less work done on defining bespoke competencies for analytics teams. See also: The Challenges of ‘Data Wrangling’

Having done just that, both as a leader within a FTSE 100 bank and for clients of Laughlin Consultancy, I want to share what a missed opportunity this is. A competency framework designed to capture the diversity of competencies needed within analytics teams has several benefits, as we will come onto later. It also helps clarify the complexity of such breadth, as we touched upon for data science teams in an earlier post.

The contents of an Analytics competency framework

Different leaders will create different flavours of competency framework, depending on their emphasis and how they articulate different needs. However, those I have compared share more in common than divides them. So, in this section, I will share elements of the competency framework developed by Laughlin Consultancy to help our clients. Hopefully that usefully translates to your situation. First, the structure of such a framework is normally a table. Often, the columns represent different levels of maturity for each competency. For example, our columns include these levels of competency:
  • None (no evidence of such a competency, or never tried)
  • Basic (the level expected of a novice, e.g. graduate recruited to junior role)
  • Developing (improving in this competency, making progress from basic ‘up the learning curve’)
  • Advanced (reached a sufficient competency to be able to achieve all that is currently needed)
  • Mastery (recognized as an expert in this competency, or ‘what good looks like’ & able to teach others)
Your maturity levels or ratings for each competency may differ, but most settle on a five-point scale from none to expert.

Second, the rows of such a table identify the different competencies needed for a particular role, team or business. For our purposes, I will focus on the competencies identified within an Analytics team. Here again, language may vary, but the competency framework we use at Laughlin Consultancy identifies the need for the following broad competencies:
  • Data Manipulation (including competencies for coding skills, ETL, data quality management, metadata knowledge & data project)
  • Analytics (including competencies for Exploratory Data Analysis, descriptive, behavioural, predictive analytics & other statistics)
  • Consultancy (including competencies for Presentation, Data Visualization, Storytelling, Stakeholder Management, influence & action)
  • Customer-Focus (including competencies for customer immersion, domain knowledge (past insights), engagement with needs)
  • Risk-Focus (including competencies for data protection, industry regulation, GDPR, operational risk management)
  • Commercial-Focus (including competencies for market insights, profit levers, financial performance, business strategy & SWOT)
  • Applications (including competencies for strategy, CX, insight generation, proposition development, comms testing, marketing ROI)
Variations on those are needed for Data Science teams, Customer Insight teams and the different roles required by different organisational contexts. Additional technical (including research) skills competencies may need to be included. However, many are broadly similar, and we find it helpful to draw upon a resource of common 'holistic customer insight' competencies to populate whichever framework is required.

If all that sounds very subjective, it is. However, more rigour can be brought to the process by the tool you use to assess individuals or roles against that table of possible scores for each competency. We find it helpful to deploy two tools for this. The first is a questionnaire that can be completed by individuals and other stakeholders (especially their line manager); from the answers, a spreadsheet generates a score against each competency (based on our experience across multiple teams). Another useful tool, especially for organizations new to this process, is for an experienced professional to conduct a combination of stakeholder interviews and a review of current outputs. Laughlin Consultancy has conducted such consultancy work for a number of large organizations, and it almost always reveals 'blindspots': apparent competencies or gaps that leaders may have missed. However you design your scoring method, your goal should be a competency framework table and a consistent, auditable scoring process.

So, finally, let us turn to why you would bother. What are some of the benefits of developing such a tool?

Benefit 1: Assessing individual analysts' performance

All managers learn that there is no perfect performance management system. Most are, as Marshall Goldsmith once described them, stuff you have to put up with. However, within the subjectivity and bureaucracy that can surround such a process, it can really help both an analyst and their line manager to have a consistent tool with which to assess and track development.
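A framework like the one described above, a table of competencies scored against maturity levels, maps naturally onto a simple data structure. The sketch below is a hypothetical illustration: the level and competency names come from the text, but the scoring function and example profiles are invented for the example.

```python
# Maturity levels from the framework, in ascending order.
LEVELS = ["None", "Basic", "Developing", "Advanced", "Mastery"]

# The broad competency areas identified in the text.
COMPETENCIES = [
    "Data Manipulation", "Analytics", "Consultancy",
    "Customer-Focus", "Risk-Focus", "Commercial-Focus", "Applications",
]

def score_gap(role_profile: dict, analyst_profile: dict) -> dict:
    """Compare an analyst's maturity against a role's required maturity.

    Positive values indicate a development gap for that competency;
    zero or negative values mean the requirement is already met.
    """
    return {
        c: LEVELS.index(role_profile[c]) - LEVELS.index(analyst_profile[c])
        for c in COMPETENCIES
    }

# Hypothetical example: a developing analyst assessed against a senior role
# that requires "Advanced" across the board.
role = dict.fromkeys(COMPETENCIES, "Advanced")
analyst = {**dict.fromkeys(COMPETENCIES, "Developing"), "Analytics": "Advanced"}
gaps = score_gap(role, analyst)
print({c: g for c, g in gaps.items() if g > 0})  # only the development gaps
```

Representing the framework this way makes the later benefits mechanical: the same gap calculation supports individual reviews, role comparisons and team-wide L&D prioritisation.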
I have found a competency framework can help in two ways during ongoing management & development of analysts:
  • Periodic (at least once a year) joint scoring of each analyst against the whole competency framework, followed by a discussion about different perspectives and where they want to improve. In this process remember also the greater benefit of playing to strengths rather than mainly focussing on weaknesses.
  • Tracking of development progress and impact of L&D interventions. After agreeing on priorities to focus on for personal development (and maybe training courses), an agreed competency framework provides a way of both having clearer learning goals & tracking benefits (did competency improve afterwards).
Benefit 2: Designing roles and career paths

Analytics and Data Science leaders largely agree that a mix of complementary roles is needed for effective teams. However, it can be challenging to be clear, when communicating with your teams and sponsors, how these roles both differ and work together. Here again, a consistent competency framework can help. Scoring each role against the competency maturities needed can show a manager how the whole team scores, and any gaps still left. It can also help in more objectively assessing candidates' suitability for different roles within a team (e.g., are they stronger at competencies for 'back office' modeller or 'front of house' consultant-type roles?). See also: Insurtech: How to Keep Insurance Relevant

If that benefit provides more consistency when considering peer-level opportunities, this tool can also help guide promotion opportunities. It can help you define the different competency maturities needed, for example, by junior analyst versus analyst versus senior analyst versus analytics manager. Such clarity enables more transparent conversations between analysts and their managers (especially when one can compare and contrast an individual's competency scores with those needed by different roles). Seeing how those competency profiles compare at different levels of seniority for different technical roles can also show a manager options for career development. That is, there are often options for junior members of the team (rather than a simple climb up the functional 'greasy pole'). Examples might be: development of statistical skills to pursue a career path in modelling roles; development of data manipulation skills to pursue a career path toward data engineer; development of questioning and presentation skills to aim for a business partner role; etc.
Benefit 3: Identifying your team's L&D priorities and where to invest

Used together, all the elements mentioned above can help an Analytics leader identify where the greatest development needs lie (both in terms of severity of gap and number of people affected). Comparing the competency profiles for the roles needed in the team with the current capabilities of role holders can identify common gaps. Sometimes it is worth investing in those most common gaps (for sufficient numbers, it's still worth considering external training). Then you can also compare the potential career paths and potential for development that managers have identified from conversations. Are there competency gaps that are more important because they help move key individuals into being ready for new roles and thus expand the capability or maturity of the overall team? Much of this will be subjective, because we are talking about human beings. But having a common language, through the competency framework tool, can help leaders better understand and compare what they need to consider.

Do you use an Analytics competency framework?

If you are an Analytics, Data Science or Customer Insight leader, do you currently use a competency framework? Have you seen how it can help you better understand the capabilities of individuals and the requirements of roles, and how both best fit together in an effective team? Do you have the means to have meaningful career path conversations with your analysts? Being able to do so can be key to improving your analysts' retention, satisfaction and engagement with your business. I'm sure there is a lot more wisdom on this topic from other leaders out there. So, please share what you have found helpful.

Paul Laughlin

Profile picture for user PaulLaughlin

Paul Laughlin

Paul Laughlin is the founder of Laughlin Consultancy, which helps companies generate sustainable value from their customer insight. This includes growing their bottom line, improving customer retention and demonstrating to regulators that they treat customers fairly.

Cyber Crimes Outpace Innovation

IT systems have never been more powerful or accessible to businesses. However, cyber crimes continue to outpace tech innovation.

IT systems have never been more powerful or accessible to businesses. However, the scope and scale of cyber crimes continue to outpace tech innovation. For years, the challenge for internal IT and security teams has been to use existing company data to construct an integrated picture of oddities and unexpected actions on their network. Recent advancements in machine learning and behavior- or anomaly-based analytics that leverage existing enterprise logs have provided security teams with much more accurate intelligence than ever before. See also: 3 Technology Trends Worth Watching

In the past, security expertise was embodied in signatures, representing particular and specific types of malware. In time, the experts couldn't keep up, signatures were out of date or not installed quickly enough, and hackers began to take full advantage. An attack from an employee account is signature-less, making conventional security approaches that rely on blacklists ineffective. Security experts quickly realized that pattern matching alone wouldn't work, so they added rules, such as the correlation rules found in security information and event management (SIEM) systems. For example, if an HR employee has been terminated and begins accessing sales data for the first time, something is likely wrong, and an alert will immediately be sounded.

Technology outpaces analysis

As the number of endpoints (e.g., mobile devices) skyrocketed, so did the volume of data to be analyzed by firms, making it more difficult for security experts to rely on cut-and-dried rules. Existing—not to mention expensive—intelligence tools, typically some form of SIEM, were supposed to predict and detect these types of threats but were unable to keep up. This left companies at an all-time vulnerable state for both insider threats and hackers.
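A correlation rule of the kind described above (terminated HR employee touching sales data) can be sketched in a few lines. This is a hypothetical illustration; real SIEM products express such rules in dedicated rule languages, not application code:

```python
from dataclasses import dataclass

@dataclass
class Event:
    user: str
    action: str      # e.g. "access_sales_data"
    department: str

# Hypothetical state a SIEM would correlate against.
terminated_users = {"jdoe"}
prior_access = {("asmith", "access_sales_data")}

def correlation_alert(event: Event) -> bool:
    """Fire when a terminated HR employee accesses sales data for the first time."""
    return (
        event.user in terminated_users
        and event.department == "HR"
        and event.action == "access_sales_data"
        and (event.user, event.action) not in prior_access
    )

print(correlation_alert(Event("jdoe", "access_sales_data", "HR")))      # True
print(correlation_alert(Event("asmith", "access_sales_data", "Sales"))) # False
```

The weakness the article goes on to describe is visible even here: every condition had to be anticipated and written down by an expert, so any attack that doesn't match a pre-written rule slips through.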
Experts predict a 4,300 percent increase in annual data production by 2020, and IDC anticipates that the “digital universe” of data will reach 180 zettabytes in 2025 (that's 180 followed by 21 zeroes). Thankfully, open source big data systems have provided a way to collect, process and manage monstrous amounts of data. Open source big data technologies such as HDFS and Elasticsearch enable solutions that handle petabytes of security data with ease. This not only allows firms to store a wide range of data sources but also reduces the overhead cost of data storage altogether, which can reach millions of dollars annually for large organizations because of the cost of vendor data management hardware and vendor per-byte pricing models. Consequently, open source big data frees up budget to invest in stronger analytics.

Algorithms crunch data

Another major advancement that has fortified cybersecurity tools is machine learning. This method of analysis flips the expert approach on its head; instead of requiring expert rule-writers to guess at attacks that might come, machine learning algorithms analyze trends, create behavior baselines—on a per-user basis—and can detect new types of attacks very quickly using baselines and statistical models. These systems are more flexible and effective than any pure expert-driven predecessors. See also: Innovation: ‘Where Do We Start?’

Technology options available to enterprises are at an all-time high, and so is the number of cyber crimes committed. Fortunately, as technology has advanced, so has the ability to seek out cyber criminals who may have been virtually invisible in the past. User and entity behavior analytics and machine learning technology continue to provide chief information security officers with the accurate insights they need to thwart attacks before severe damage is done. This article originally appeared on ThirdCertainty. It was written by Nir Polak.
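The per-user behavior baselining described above can be illustrated with a minimal statistical sketch. The data and threshold here are hypothetical, and real UEBA products use far richer models than a single standard-deviation test:

```python
import statistics

def build_baseline(daily_event_counts):
    """Summarize a user's normal behavior from historical daily activity."""
    return statistics.mean(daily_event_counts), statistics.stdev(daily_event_counts)

def is_anomalous(observed, baseline, threshold=3.0):
    """Flag activity more than `threshold` standard deviations above normal."""
    mean, stdev = baseline
    if stdev == 0:
        return observed != mean
    return (observed - mean) / stdev > threshold

# Hypothetical: an account usually touches about 20 files a day, then spikes.
history = [18, 22, 19, 21, 20, 23, 17]
baseline = build_baseline(history)
print(is_anomalous(400, baseline))  # True  -- a spike worth an analyst's attention
print(is_anomalous(24, baseline))   # False -- within normal variation
```

The key difference from the rule-based approach is that no one had to predict the attack in advance: the baseline is learned from each user's own history, so novel behavior stands out by definition.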

Byron Acohido

Profile picture for user byronacohido

Byron Acohido

Byron Acohido is a business journalist who has been writing about cybersecurity and privacy since 2004, and currently blogs at LastWatchdog.com.

Addressing Evolving Cyber Threats

Almost every breach begins with a human being. By understanding how such threats manifest, organizations can mitigate risks ahead of time.

In 2015, an accountant looking at the balance sheets of a U.S. tech company noticed a $39 million hole in the figures. The accountant would have been even more dismayed to know where it had gone – a member of the financial team in an overseas subsidiary had transferred it directly to the thief. All the thief had to do was pretend to be a CEO. It's a kind of attack known as a CEO email attack, and just one of a broad range of hostile tactics known as social engineering attacks. These are attacks that exploit the natural weaknesses of human beings – our credulity, our naiveté, our propensity to help strangers and, sometimes, in the case of phishing attacks, just our greed – to get around security systems. To put it in the language of 21st-century cyber security: Social engineering operates on the idea that, just like any computer system, human beings can be hacked. In fact, a lot of the time they're much easier to hack than computers. Understanding this fact, and the forms that social engineering can take, is essential to formulating a robust defense strategy. These strategies are even more important now, as the lines between the physical and digital worlds continue to blur and the assets at risk continue to multiply, thanks to the proliferation of connected technologies.

In Depth

From the serpent in the Garden of Eden to the phishing emails that promise fortunes if only you'd part with your bank details and Social Security number, social engineers have been with us for a while. But few epitomize their arcane arts quite like Frank Abagnale, whose exploits between the ages of 15 and 21 were immortalized in the Steven Spielberg film Catch Me If You Can. During those years, Abagnale posed as a doctor, a lawyer and an airline pilot and became one of recent history's most legendary social engineers.
He now runs a consultancy, Abagnale and Associates, that aims to educate others – including government agencies such as the FBI, and numerous businesses – on how to catch people like him as social engineering methods shift. Abagnale asserts: “Some people used to say that I'm the father of social engineering. That's because, when I was 16 years old, I found out everything I needed to know – I knew who to call, and I knew the right questions to ask – but I only had the use of a phone. People are doing the same things today, 50 years later, only they're using the phone, they're using the mail system, they're using the internet, email, cloud. There's all this other stuff, but they're still just doing social engineering.”

We live in an overwhelmingly digital world, and the projected 50 billion Internet of Things (IoT) devices due to be hooked up to the internet by 2020 mean the already broad frontier of digital risk will only continue to grow. “I taught at the FBI for decades. There is no technology today that cannot be defeated by social engineering,” Abagnale says. Making sure the human links that sit between this expanding set of digital nodes remain secure lies at the heart of securing the whole system – one increasingly tied up with physical as well as digital assets.

New Risks

In 2010, the Stuxnet worm, a virus believed to have been developed jointly by the U.S. and Israeli militaries, managed to cause substantial damage to centrifuge generators being used by the Iranian nuclear program. The virus was designed to attack the computer systems that controlled the speed at which components operated in industrial machinery. By alternately speeding up and slowing down the centrifuges, the virus generated vibrations that caused irreparable mechanical damage. It was a new breed of digital weapon: one designed to attack not only digital systems but physical systems as well. It was physical in another way.
To reach its target, the virus had to be physically introduced via an infected USB flash drive. Getting that drive into a port, or into the hands of someone who could, required human intervention. In this case, anonymous USB devices were left unattended around a facility and were later inserted by unwitting technicians.

The Stuxnet worm highlights the extreme end of the dangers that lie at the overlap of digital technology, physical assets and human beings, but the risks extend well beyond that. More prosaic, for instance, are email scams that work by tricking the receiver into sharing vital information – remember the notorious “Nigerian prince” emails, where a fraudster would promise a willing helper untold riches in return for money to release him from jail? Some of these scammers run elaborate networks that cross countries and continents and can be worth more than $60 million.

Now move the concept into the organization: Imagine receiving an email from someone purporting to be your boss, asking in an official and insistent tone for a crucial keyword or a transfer of funds. Could a typical employee be relied on to deny that request? What about a phone call? That was hacker Kevin Mitnick’s strategy. A Frank Abagnale of the digital age, Mitnick pulled off a range of high-profile attacks on key digital assets simply by phoning up and asking for passwords.

IoT: The Convergence of the Physical and Cyber Worlds

“Humans are the weakest link in any security program,” says Dennis Distler, director, cyber resilience, Stroz Friedberg, an Aon company. In fact, it is people, rather than weaknesses or failures in computer systems, who lie at the heart of around 90% of cyber breaches. Social engineering attacks come in many forms, and the risk from them will never be fully mitigated.
But while full mitigation is impossible, you can limit your exposure – and that strategy begins at the individual level. Humans are the targets, so the first line of defense has to come from humans. “You certainly remind people that you have to be smarter, whether you’re a consumer or CEO. You have to think a little smarter, be proactive, not reactive,” Abagnale says.

While social engineering has traditionally focused on financial loss, cyber risk is shifting toward tangible loss, with the potential for property damage or bodily injury arising from IoT devices. Historically, cyber risk has been associated with breaches of private information, such as credit card, healthcare and personally identifiable information (PII). More and more, however, the IoT – the web of connected devices and individuals – will pose an increased risk to physical property as breaches in network security begin to affect the physical world. A better understanding of vulnerabilities and entry points – at the individual as well as the device level – will be critical for organizations in 2017 and beyond.

Organizational Mitigation

Security awareness training and, to a lesser extent, technology can prevent many successful attacks – whether IoT-related, rooted in human error or stemming from outright social engineering – but the risk will never be fully eliminated. Organizations can nonetheless take a number of steps to protect themselves. Distler of Stroz Friedberg highlights several key steps a company can take to minimize its exposure to social engineering risk:
  • Identify what and where your organization’s crown jewels are. A better understanding of your most valuable and vulnerable assets is an essential first step in their protection.
  • Create a threat model to understand the types of attacks your organization will face and the likelihood of them being exploited. From email phishing to physical breaches, the threat model can help teams prioritize and prepare how to best respond.
  • Create organization-specific security awareness training addressing what types of attacks individual employees could expect, how to detect them and what the protocols for managing and reporting them are. Consider instituting a rewards program for reporting suspected attacks to further encourage vigilance.
  • Provide longer and more detailed training for high-value or vulnerable targets, such as members of the C-suite and their executive support staff, or members of IT, finance, HR or any other employees with access to particularly sensitive information. These targets could range from account managers to mechanical engineers working on major operational projects. The enhanced training could include red-teaming exercises, which test the ability of selected staff to respond to these breaches in real time.
  • Create well-defined procedures for handling sensitive information and provide routine training on these procedures for employees who handle sensitive information.
  • Conduct routine tests (recommended quarterly at a minimum) for the most likely social engineering attacks.
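Alongside training and testing, some of the scenarios above – notably the "email from the boss" CEO-fraud attack – can also be screened for automatically. The following is a minimal illustrative sketch in Python, not a description of any product mentioned in this article; the `looks_like_ceo_fraud` function, the executive list and the `example.com` domain are all hypothetical assumptions for the example. It flags messages whose From header borrows a known executive's display name while originating from an external domain.

```python
# Minimal sketch of a display-name spoofing check for "CEO fraud" emails.
# The executive names and corporate domain below are hypothetical examples.
from email.utils import parseaddr

EXECUTIVES = {"jane doe", "john smith"}   # hypothetical executive names
COMPANY_DOMAIN = "example.com"            # hypothetical corporate domain

def looks_like_ceo_fraud(from_header: str) -> bool:
    """Return True if the From: header impersonates a known executive
    while the message originates from outside the corporate domain."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    return display_name.strip().lower() in EXECUTIVES and domain != COMPANY_DOMAIN

# An external address borrowing an executive's display name is flagged;
# the same display name on the corporate domain is not.
print(looks_like_ceo_fraud('"Jane Doe" <jane.doe@mail-example.net>'))  # True
print(looks_like_ceo_fraud('"Jane Doe" <jane.doe@example.com>'))       # False
```

A real deployment would rely on stronger signals than display names – SPF, DKIM and DMARC checks at the mail gateway – but even a heuristic like this illustrates how the human-targeting pattern described above can be made machine-detectable.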
Preparing for Tomorrow’s Breaches

The term “cyber threat” covers more and more ground. No longer is it only a threat posed to digital assets by viruses and malware, or a financial threat posed to individuals and financial institutions. Cyber risk now encompasses a broad range of risks with the potential to harm assets from property to brand and reputation. And at the center of all of these interactions are people: Almost every breach begins with a human being. By understanding how such threats can manifest, and how to deal with them when they do, risks can be mitigated ahead of time. Bringing together various functional groups within an organization will be crucial as teams prepare for the more multifaceted risks of our increasingly connected future.

Stephanie Snyder

Stephanie Snyder is the national sales leader for Aon’s professional risk solutions practice, focusing on E&O and cyber sales, as well as Aon’s unique value proposition for cyber risk.