
Where Can You Find 21st Century Growth?

Many firms aspire to "customer-centricity." Far fewer achieve both improved customer experiences and sustainable profit growth.

Having spent a number of weeks speaking at, or chairing, various industry events, I've seen how nervous firms are about the rise of fintech/insurtech and how convinced they are that digital and technology are the route to growth. Is that right? Do success and sustainable growth really come from that focus? In response, I want to share a two-part blog post, based on a talk I'm giving to mutual lenders in London, on where to focus as a 21st century business. I hope it will help leaders grappling with competing demands on their time and attention. Here is part one.

Where to focus? (Winds of change)

In our ever-changing world, where should you focus to improve performance and readiness for the future? Many social and business commentators highlight key trends. These include:
  • The rise in the power of consumers (including transparency and ease of switching)
  • The erosion of trust in organizations (especially financial services)
  • Increased regulation (including conduct risk)
  • Rise in service expectations driven by experience from other sectors
  • Emergence of the technology disruption/innovation scene
All these combine to make it ever more urgent for lenders to regain trust, by both meeting service expectations and communicating more appropriately. Those twin aims are informed by better customer insight: genuinely understanding your customers and the jobs they want to get done (including when), better than your competition.

See also: How to Take a Bold Approach to Growth

Fortunately, alongside the societal and technology challenges I listed above, opportunities also present themselves. These include:
  • Increased availability of a wider range of data, and of the computing power to analyze it
  • Growth in analytics tools/market (including data science and statistics)
  • Improvements in marketing automation, service bots and personalization capability
  • Evidence from disrupters (like Metro Bank and Aldermore) of service differentiation
  • Financial Conduct Authority (FCA) focus on behavioral economics giving opportunities in "regulatory sandbox"
Based on my experience of creating and leading such teams for more than 15 years and of coaching leaders across the U.K. and Europe, I see tremendous commercial potential. However, too many firms rush into hiring data scientists without clear business goals. As too many have discovered to their cost, the real value comes from improved customer insight (not technology or data science for their own sake).

On the encouraging side, a number of studies have shown that businesses that make extensive use of analytics can outperform their peers. But, just as was previously seen with the "hype cycles" of data warehousing and customer relationship management (CRM), a lot of technology spending can also be wasted. How can businesses avoid that pitfall?

Many lenders (and financial services firms more broadly) aspire to "customer-centricity" as a business strategy. Far fewer achieve both improved customer experiences and sustainable profit growth as a result.

Insights 2020 findings

A key global study focused on understanding why some businesses succeed at this challenge while others fail was Insights 2020. As reported in Harvard Business Review, this collaboration talked to more than 10,000 practitioners and 330 leaders across more than 60 countries. Insights 2020 identified three factors that distinguished those who achieved customer-centricity, measured through customer satisfaction, digital engagement and commercial return. These three priorities are:
  1. Purpose-led, data-driven, consistent customer experiences (multi-channel/journey)
  2. Embedded customer obsession in culture (decision-making, performance management and embracing experimentation)
  3. Customer insight team is an active, equal business partner
Getting clear on customer insight

What do I mean by customer insight? Different organizations will have different answers. Some appear to equate the term with research, others with analytics. A few relate it to targeted database marketing, and almost everyone can see the importance of quality data for any such work.

Benchmarking best practice within customer insight, especially for financial services firms, has taught me that a more holistic approach works best. The most capable teams combine technical skills in data, analytics, research and database marketing. But, as the saying goes, it's what you do with it that counts. Using those technical skills in concert, to achieve a deeper understanding of your customers that enables behavioral change, is where true customer insight lies. My own definition of customer insight is: "A non-obvious understanding about your customers, which, if acted upon, has the potential to change their behavior for mutual benefit."

Key strengths needed (including a soft one)

Achieving that depth of insight and impact requires two key strengths. The first is using the four technical disciplines I mentioned above in concert, to produce synergy. I normally explain this through the Laughlin Consultancy model for Holistic Customer Insight, a virtuous circle of how to operate in multi-disciplinary teams. The second strength is analysts who can speak to your business. There is no point discovering great insights into your customers if these remain on the shelf. For that reason, several leading customer insight teams have benefited from investing in softer-skills training for their technical teams. The model I use is a nine-step one, from incisive questioning (to determine the real business need) all the way through to following up to ensure insights are acted upon in the business (to achieve customer and commercial targets).

See also: The Formula for Getting Growth Results

Your data foundation

For most organizations, reaching that level of capability begins with a focus on data. When speaking with leaders across many different sectors, I find that a perennial headache is either getting the data they need or being able to achieve a single customer view. Such a focus on data, as the foundation of customer insight and customer-centricity, makes sense. However, I would like to make a plea for attention to two aspects that are too often neglected. Data models and metadata may sound like topics for technophiles or data geeks, but the lack of either can have big business impacts.

Faced with the challenge of capturing and using more data, and egged on by technology suppliers, too many companies leap straight into a technology solution and technical build. However, with the pace of change and the ever-growing list of data that may be needed (consider the growth of the Internet of Things, for example), businesses need a more sustainable and technology-independent map. That is the role of the too-often-neglected conceptual and logical data models. These should be treated like blueprints for your business ecosystem.

Alongside that data gap, another common shortfall for analytics teams is missing metadata: data about data. In all the excitement of gathering more facts about customer segments, or potential triggers for marketing campaigns, the basic need for things like a data dictionary can be missed. Many insight or analytics teams rely on what senior analysts hold in their heads. But the expertise about what different data items mean, which can be trusted and how to interpret different values is too valuable to be allowed to walk out of the door.
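To make that concrete, here is a minimal sketch, in Python, of what even a lightweight data dictionary can capture. The fields and the example entry are hypothetical illustrations, not a prescribed standard.

```python
from dataclasses import dataclass

@dataclass
class DataDictionaryEntry:
    """Metadata for one data item: what it means and how far to trust it."""
    name: str           # physical column name
    definition: str     # business meaning, in plain language
    source_system: str  # where the data originates
    valid_values: str   # how to interpret the codes
    trust_notes: str    # known quality caveats

# A hypothetical entry of the kind an insight team might record,
# rather than relying on what senior analysts hold in their heads.
data_dictionary = [
    DataDictionaryEntry(
        name="cust_tenure_months",
        definition="Whole months since the customer's first live policy",
        source_system="policy_admin",
        valid_values="0-720; null means prospect, not yet a customer",
        trust_notes="Unreliable for records migrated before 2015",
    ),
]

for entry in data_dictionary:
    print(f"{entry.name}: {entry.definition} ({entry.trust_notes})")
```

Even a simple register like this turns tribal knowledge into something the whole team, and its successors, can query.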
What next?

That's it for part 1. Part 2 is coming soon.

Paul Laughlin

Paul Laughlin is the founder of Laughlin Consultancy, which helps companies generate sustainable value from their customer insight. This includes growing their bottom line, improving customer retention and demonstrating to regulators that they treat customers fairly.

Smart Things and the Customer Experience

The potential implications for insurers are great, but they must focus on what has too often been a lousy customer experience.


The inanimate world around us is coming alive, powered by smart things and AI. It is difficult to name an object for which there is not a smart version.

Garage doors, thermostats, doorbells, appliances? Check.

Shoes, belts, hats, shirts? Check.

Cars, trucks, boats, drones? Check.

Just about anything you can imagine, and some bizarre things that would probably never cross your mind, have smart versions that connect to the internet and can be controlled by mobile apps or even take action on their own. The potential is great, and the implications for insurance are many. But one aspect of smart things with a mixed record so far is how humans communicate with them. In some cases, the customer experience is well thought out and will contribute to adoption. In other cases, the experience is downright awful.

Without naming specific companies, here are a few examples of good and bad experiences with smart things.

  • Smart TVs: I am starting here because some of these are terribly frustrating. Many require interaction via remote control devices, pop-up keyboards on the TV screen and the down-down-over-over-over maneuvering on the keyboard for EACH LETTER. It reminds me of the early texting days with triple taps.
  • Smart tags: Small devices that attach to keys, slide into wallets or get packed into suitcases are widely available. I've tried many of these devices and have discovered that some are simple, fast and easy to install and use, while others are a nightmare. One device I ordered was extremely hard just to get out of the package! Another required sliding it open to install a battery, and I almost gave up trying to pry it open. Others, by contrast, took me less than a minute to set up, and they just work.
  • Telematics devices: There seems to be a migration away from dongles, which is a good thing. In some cars, you have to be a contortionist to get your body into position to plug the dongle into the OBD port. Mobile-app-based telematics are easier to set up, and the user interfaces are usually modern.
  • Wearables: I’ve had three different fitness wearables. Generally, the experience is good, although sometimes the data entry to set up a profile and do regular logging gets tedious.
  • Vehicle information/entertainment systems: The ability to initiate a phone call or change the radio station with a voice command is great – when it works. There are some commands that are just never interpreted correctly, or never interpreted at all.
See also: How to Make Smart Devices More Secure  

I could continue with examples of smart home devices, virtual reality/augmented reality headsets and glasses and other smart objects. Many of you can relate from your own experiences: some are slick, easy and fun – and others tedious and frustrating. There are several lessons here that insurers should keep in mind in any venture where they are providing or leveraging smart devices to policyholders.

  • Recognize that customer experience goes beyond the mobile app. Ordering, shipping, opening the box and reading the initial instruction booklet are all part of the experience. Some insurers discovered how important this can be after sending out telematics dongles.
  • Make sure it works! I have returned more than one smart item, including a bathroom scale that was supposed to sync with a fitness wearable but never worked, even after several calls to tech support. It is the ultimate poor customer experience when something does not work as advertised.
  • Resist the urge to collect too much information. Especially during set-up, collect only what is minimally required to get the device going, not extra information that you want for marketing and other purposes. People who buy a smart device are eager to get it up and running.
  • Ensure that tech support is accessible. “Fill out this form, and we will contact you within the next 48 hours” is not a good way to go. Most people are excited about their new device and don’t want to wait this long for a response. At the very least, provide a live chat session.
See also: ‘It’s the Customer Experience, Stupid’  

The connected world of smart things is exciting and offers many possible ways to enrich our daily lives, improve business operations and make the world safer. The functionality of a smart device is very important. But don’t forget that the customer experience will play a large role in the adoption of smart things.


Mark Breading

Mark Breading is a partner at Strategy Meets Action, a Resource Pro company that helps insurers develop and validate their IT strategies and plans, better understand how their investments measure up in today's highly competitive environment and gain clarity on solution options and vendor selection.

Now Is the Time for Cyber to Take Off

Citing a lack of historical data, insurers are being cautious about writing cyber policies--just as technology is opening a huge opportunity.

Uncertainty about several key variables appears to be causing U.S. businesses and insurance companies to move cautiously into the much-heralded, though still nascent, market for cyber liability policies. Insurers continue to be reluctant to make policies more broadly available. The big excuse: Industry officials contend there is a relative lack of historical data around cyber incidents, and they bemoan the constantly evolving nature of cyber threats. This assessment comes in a report from the Deloitte Center for Financial Services titled "Demystifying Cyber Insurance Coverage: Clearing Obstacles in a Problematic but Promising Growth Market."

"Insurers don't have sufficient data to write coverage extensively with confidence," says Sam Friedman, insurance research leader at Deloitte. But the train is about to leave the station, and some of the stalwarts who shaped the insurance business into the ultra-conservative (read: resistant to change) sector it has become could very well be left standing at the station.

Consider that regulations imposing tighter data handling and privacy protection requirements are coming in waves. Just peek at the New York Department of Financial Services' newly minted cybersecurity requirements or Europe's recently revamped General Data Protection Regulation. With cyber threats on a steadily intensifying curve, other jurisdictions are sure to jump on the regulation bandwagon, which means the impetus to make cyber liability coverage a standard part of everyday business operations will only increase.

Meanwhile, cybersecurity entrepreneurs, backed by savvy venture capitalists, are moving aggressively to eliminate the weak excuse that there isn't enough data available to triangulate complex cyber risks. In fact, the opposite is true. Modern-day security systems, such as anti-virus suites, firewalls, intrusion detection systems, malware sandboxes and SIEMs, generate mountains of data about the security health of business networks. And the threat intelligence systems designed to translate this data into useful operational intelligence are getting more sophisticated all the time.

See also: Why Buy Cyber and Privacy Liability...

And while large enterprises tend to have the latest and greatest of everything in house, even small and medium-size businesses can access cutting-edge security systems through managed security services providers. Meanwhile, big investment bets are being made in a race to be the first to figure out how to direct threat intelligence technologies to the task of deriving the cyber risk actuarial tables that will permit underwriters and insurers to sleep well at night.

One cybersecurity vendor to watch in this arena is Tel Aviv, Israel-based InnoSec. "Cyber insurance policies are being given out using primitive means, and there's no differentiation between policies," observes InnoSec CEO Ariel Evans. "It's completely noncompetitive and solely aimed right now at the Fortune 2000. Once regulation catches up with this, cyber insurance is going to be required. This is around the corner." InnoSec was busy developing systems to assess the compliance status and overall network health of companies involved in merger and acquisition deals. It now has shifted to seeking ways to apply those network assessment approaches to the emerging cyber insurance market.

At the moment, according to Deloitte's report, that market is tepid, at best. While some have predicted U.S. cyber insurance sales will double and even triple over the next few years to reach $20 billion by 2025, cyber policies currently generate only between $1.5 billion and $3 billion in annual premiums.

Those with coverage in minority

As of last October, just 29% of U.S. businesses had purchased cyber insurance coverage, despite the rising profile of cyber risk, according to the Deloitte report. Such policies typically cover first- and third-party claims related to damages caused by a breach of personally identifiable information or some derivative, says Adam Thomas, co-author of the Deloitte report and a principal at the firm. In some cases, such policies also might cover business disruption associated with a cyber incident.

The insurance industry contends it needs more businesses to buy higher-end, standalone cyber insurance policies until enough claims data can be collected to build reliable models, much as was done with the development of auto, life and natural disaster policies. But businesses, in turn, aren't buying cyber policies in large enough numbers because insurers are adding restrictions to coverage and putting fairly low limits on policies to keep exposure under control.

"It is a vicious cycle," Friedman says. "Insurers recognize that there is a growth opportunity, and they don't want to be left out of it," he says. "On the other hand, they don't want to take more risk than they can swallow."

While the insurance industry gazes at its navel, industry analysts and cybersecurity experts say the big challenge, and opportunity, is for underwriters and insurers to figure out how to offer all businesses, especially small and medium-size companies, more granular kinds of cyber policies that actually account for risk and provide value to the paying customers.

"What they're doing now is what I call the neighbor method," InnoSec's Evans says. "You're a bank, so I'll offer you a $100 million policy for $10 million. The next guy, he's a bank, so I'm going to offer him a $100 million policy for $10 million. It has nothing to do with risk. The only place this is done is with cyber."

Talk in same terms

This is due, in part, to a lack of standard terminology used to describe cyber insurance-related matters, says Chip Block, vice president of Evolver, a company that provides IT services to the federal government. The SANS Institute, a well-respected cybersecurity think tank and training center, last year put out a report that drills down on the terminology conundrum, including recommendations on how to resolve it, titled "Bridging the Insurance/Infosec Gap."

The policies themselves have been another factor. "If you compare car insurance from Allstate and Geico, a majority of the policies are relatively the same," Block says. "We haven't gotten to that point in cyber. If you go from one underwriter to another, there is no common understanding of the terminology." Understandably, this has made it hard for the buyer to compare policies or to determine the relative merits of one policy over another. Block agrees that cyber policies today generally do not differentiate based on risk profile, so a company that practices good cyber hygiene is likely to see no difference in premiums compared with one that doesn't.

See also: How Data Breaches Affect More Than Cyberliability

Industry must get moving

InnoSec's Evans argues that even though cybersecurity is complex, the technology, along with best-practice policies and procedures, is readily available to solve the baseline challenges. What is lacking is initiative on the part of the insurance industry to bring these components to bear on the emerging market. "This is absolutely possible to do," she says. "We understand how to do it."

Putting technological solutions aside, there is an even more obvious path to take, Friedman argues. Resolve the terminology confusion, and there is little stopping underwriters and insurers from crafting and marketing cyber policies based on meeting certain levels of network security best-practices standards, he says. "You look at an organization's ability to be secure, their ability to detect intrusions, how quickly they can react and how much they can limit their damage," he says. "In fact, insurers should go beyond just offering a risk-transfer mechanism and be more aggressive in helping customers assess risk and their ability to manage and prevent."

Thomas points to how an insurance company writing a property policy for a commercial building might send an engineering team to inspect the building and make safety recommendations. The same approach needs to be taken for cyber insurance, he says. "The goal is to make the insured a better risk for me," he says.
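To make concrete the kind of risk-differentiated pricing Evans and Friedman are calling for, as opposed to the "neighbor method," here is a deliberately simplified, hypothetical sketch. The posture factors, weights and base rate are invented for illustration and are not drawn from any insurer's actual rating model.

```python
# Hypothetical risk-differentiated cyber pricing: scale a baseline premium
# by a crude security-posture score. All weights and rates are assumptions.

def cyber_premium(base_rate: float, limit: float, posture: dict) -> float:
    """Return a premium that reflects security hygiene, not just industry."""
    weights = {
        "detects_intrusions": 0.35,   # ability to detect intrusions
        "patches_promptly": 0.25,     # basic hygiene
        "responds_quickly": 0.25,     # how quickly they can react
        "limits_blast_radius": 0.15,  # how much they can limit damage
    }
    score = sum(weights[k] for k, v in posture.items() if v)  # 0.0 to 1.0
    multiplier = 1.5 - score  # best hygiene earns 0.5x, worst pays 1.5x
    return base_rate * limit * multiplier

good = {"detects_intrusions": True, "patches_promptly": True,
        "responds_quickly": True, "limits_blast_radius": True}
poor = {k: False for k in good}

# Same $100 million limit, very different prices:
print(round(cyber_premium(0.10, 100_000_000, good)))  # 5000000, i.e. $5M
print(round(cyber_premium(0.10, 100_000_000, poor)))  # 15000000, i.e. $15M
```

Under the flat "neighbor method," both firms would pay the same $10 million; pricing on posture rewards the better risk and loads the worse one.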

Byron Acohido

Byron Acohido is a business journalist who has been writing about cybersecurity and privacy since 2004, and currently blogs at LastWatchdog.com.

Most Controversial Claims Innovation

The innovation -- claims optimization -- requires quite a number of radical assumptions about what insurance is there to do and how it should do it.

Pushing at boundaries and challenging traditional notions is what innovation is all about. Rethinking something that people have taken for granted for too long can open opportunities for enhancing service, increasing efficiencies and generating revenue.

Take claims. The idea that the claim settlement should reflect the insured loss has held sway for a great many years. Is it time for innovators to knock this one off its pedestal? It looks like that time has come, for insurers in the U.S. are starting to turn to claims optimization.

To explain this, I should start with the perhaps more familiar concept of price optimization. The price version has the insurer setting the premium for insuring a risk according to what the policyholder is prepared to pay, rather than the level of risk that the policy is presenting. It involves using big data to work out the price at which particular types of customer will start to look for alternative quotes, and then progressively raising the premium to just below that amount (a simple sketch of this logic appears at the end of this article).

See also: Innovation Challenge for Commercial Lines

A leading figure in the U.K. insurance market recently described price optimization to me as "recognizing the true lifetime value of the customer and reflecting that in a better price." It's a nice quote, so long as you're the insurer on the receiving end of that "lifetime value" and that "better price" and not the customer paying for them. I've yet to hear how that description fits in with the regulator's interest in fair outcomes for customers. It would struggle to do so, for reasons I explain in this paper I wrote for the U.K.'s Chartered Insurance Institute last year. In short, the nature of insurance makes price optimization highly questionable on ethical grounds. It's hardly surprising that a recent PwC survey found that 72% of insurance CEOs think it will be harder to sustain trust in a digitized market.

With claims optimization, insurers would seem to be abandoning all remaining hope of sustaining trust in a digitized market. Claims optimization involves using big data to establish the amount that a particular claimant would be prepared to accept as settlement of the claim. So if all those algorithms pinpointed the claimant as someone in financially tight circumstances, then the settlement offered to that claimant would be optimized to reflect a greater and more immediate need for cash. This would involve tweaking the comparative speed of a cash settlement versus a replacement service and setting relative offers to achieve the optimized position.

This still falls neatly within that description of optimization as "recognizing the true lifetime value of the customer and reflecting that in a better price." Indeed, an insurer happy to optimize on price would hardly need to bat an eyelid at optimizing that other value determinant: claims expenditure. After all, the insurer would say, if a claimant is prepared to accept that lower settlement, why shouldn't the insurer offer it?

Let's be quite clear: Claims optimization is an exploitation of the unequal balance of information and economic power between a consumer and an insurer. It is unprofessional, and it is unethical.

Might this view perhaps point to me being a naysayer on innovation? Not at all. I have a postgraduate degree from one of the earliest courses in the U.K. on the study of scientific and technological innovation. That course taught me to recognize the multi-sided nature of innovation and to see that it is not some natural force of business evolution, but a mix of social, economic, philosophical and technological drivers.

See also: The Great AI Race in Insurance Innovation

Despite what vendors of "big data solutions" say about optimization, it is not a natural next step for underwriting and claims. It rests on a number of quite radical assumptions about what insurance is there to do and how it should do it. It would be nice to see some of those assumptions brought out into the open and debated, but, unless Andrew Bailey at the FCA follows through on some of his hesitations about price optimization, I fear such a debate is unlikely.

I have heard some chief executives at leading insurers wonder whether price optimization was the right thing to do. If, as it would appear, they no longer have any such qualms on the pricing side, then it is but a small step to their being equally relaxed about using it on the claims side. In that case, I look forward to listening to them explain their reasoning on a public forum like, say, BBC Radio's Today program.
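As a coda, the mechanics being critiqued are simple enough to sketch in a few lines. This is a hypothetical illustration of the optimization logic described above, not any insurer's actual model; the numbers are invented. Note how two customers presenting identical risk end up paying different premiums, which is precisely the ethical problem.

```python
# Hypothetical sketch of price optimization: charge just below the point
# at which a customer segment is predicted to start shopping around.

def optimized_premium(risk_cost: float, switch_threshold: float,
                      safety_margin: float = 0.02) -> float:
    """Price just under the predicted switching point, never below risk cost."""
    candidate = switch_threshold * (1 - safety_margin)
    return max(candidate, risk_cost)

# Two customers with the same underlying risk (an assumed $400 technical
# price) but different predicted tolerance for price rises:
print(round(optimized_premium(risk_cost=400.0, switch_threshold=520.0), 2))  # 509.6
print(round(optimized_premium(risk_cost=400.0, switch_threshold=460.0), 2))  # 450.8
```

Claims optimization inverts the same logic: estimate the lowest settlement a claimant is predicted to accept, and offer just above it.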

Duncan Minty

Duncan Minty is an independent ethics consultant with a particular interest in the insurance sector. Minty is a chartered insurance practitioner and the author of ethics courses and guidance papers for the Chartered Insurance Institute.

The Bad Actors Among Lead Sellers

Bad actors cause significant problems for lead buyers, but industry-wide use of technology can tackle the issue.

In today's online shopping environment, lead sellers and lead-buying marketers work together toward the same goal of delivering a great customer experience; however, they often struggle with an array of challenges along the way.

The Evolution of the Lead Seller-Lead Buyer Relationship

The relationship between online lead sellers and lead buyers began as a very simple one in the late '90s. The lead seller was also the lead generator, who dealt directly with the lead buyers. But it wasn't long before small and medium-sized publishers realized they could make money by generating leads that they would then sell to the large lead generators that had direct relationships with the lead buyers. This resulted in lead generators that would also aggregate leads generated by smaller publishers.

By the mid-2000s, the large generators and aggregators began to monetize individual leads further by sharing them with other large generators and aggregators that had unique end-buyer relationships. Initially, this transpired in a private, trusted transaction environment with strict rules in place that were easily enforced. Two companies worked together to maximize the monetization of each other's leads. Collectively, they made more money, and the consumer had more options available. It was a win-win-win for everyone in the ecosystem.

As more of these private sharing arrangements developed (driven by the additional monetization opportunity), the technology evolved to support them. Hence the ping post ecosystem, which publishers, aggregators and generators leverage to best monetize leads. Much like in a stock exchange, sellers and buyers come together to create an efficient market with a variety of options for the consumer, and the highest bidders are typically the entities with the best consumer offer.

See also: Changing Business Models, 'New' ERM

The Present State of the Lead Seller-Lead Buyer Relationship

In insurance, the relationship between sellers and buyers is generally strong, as long as the publishers, aggregators and generators play by the rules. For example, leads should only be sold to a certain number of buyers in a shared-lead world, and exclusive leads should only be sold to one buyer. Other examples of rules include:
  • No manipulation of the consumer data
  • No recycling of leads later
  • No fake leads
  • No non-TCPA-compliant leads
  • No incentive-driven leads
  • No unauthorized sales to stated end-buyers
It is the "bad actors" that don't play by these rules in the ping post ecosystem that can cause significant problems. We can look to the evolution of the mortgage and education verticals to learn how to solve these problems.

Solutions for Today's Lead Seller Challenges

TCPA Compliance. One of the most stressful challenges that lead sellers face today is TCPA compliance. Given that TCPA case filings increased more than 940% between 2010 and 2015, coupled with the fact that consumers are being encouraged to file suits by some law firms, the TCPA has become a huge hurdle for sellers to overcome. Both lead sellers and buyers must avoid exchanging non-TCPA-compliant leads and make sure they have persuasive evidence of consent in the event they, or end-buyers, face a complaint or lawsuit.

Measuring Consumer Intent. Another challenge sellers face is gaining the ability to measure the intent of each consumer. With insights into the individual consumer journey, you gain the ability to measure the intent, and therefore the value, of each lead. There are technology solutions available that enable you to measure consumer intent.

Those Bad Actors Not Playing by the Rules. Many lead buyers are actively leveraging technology to validate consumer data as "good data," and some are using de-duping solutions to avoid buying the same lead twice. In the ping post ecosystem, much of the data on the origin and history of a lead is "contributed data." The challenge of eliminating old or recycled leads, dupes, and fake and no-intent leads stems from a lack of ability to verify that contributed data as fact. For example, an insurance lead aggregator buys a lead from another aggregator or from a generator and agrees to sell the lead only once, and only to one specific insurance provider. This is contributed data, but there is no way to validate it as fact. What sometimes happens is that a bad actor will sell that lead to other insurance providers, or hold it for a week or so and then sell it again as a recycled lead. There is no transparency and little accountability.

To validate contributed data as factual, you have to establish a "chain of custody" to verify that each lead seller participating in the ping post system is playing by the rules (see the sketch at the end of this article). Then, if there is ever a problem or complaint, you have data to help the lead generator or buyer that is experiencing the problem identify where in the chain it occurred and expose the bad actor.

The Most Crucial Area for Improvement

Improving Lead Value. To continually improve relations with their buyers, sellers always seek ways to cultivate greater value in their leads. The simplest solution is for sellers to distribute the highest-intent leads possible and do everything they can to eliminate selling "no intent" leads to their clients. To best accomplish this, sellers must require any upstream publishers and generators to adhere to the simple rules that sellers and buyers have established, and must have a mechanism in place to verify any contributed data surrounding exchanged leads. If anyone is still following the antiquated practices of a bad actor, it's going to catch up with them eventually.

See also: Developing Programs for Shifting Channels

Bad actors are bad news for the entire ecosystem, leaving a bad taste in the mouths of buyers that can cloud relationships with reputable sellers and result in a deterioration of value for all participants.
By exposing the bad actors, sellers can avoid a race to the bottom, ensure they deliver a great consumer experience and deliver high-intent leads, and the resulting growth opportunities, to buyers. Technology is available to the lead-gen industry today to enable this chain of custody and the associated data trail. We encourage everyone to join this insurance industry initiative.
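The chain of custody described above is straightforward to sketch. Here is a minimal, hypothetical illustration using hash-linked records, in the spirit of a tamper-evident ledger; the field names and hashing scheme are assumptions for illustration, not an industry specification.

```python
import hashlib
import json
import time

# Hypothetical "chain of custody" for a lead: each handler appends a record
# whose hash covers the previous record, so the lead's origin and resale
# history can be verified rather than taken on trust as contributed data.

def add_custody_record(chain: list, seller: str, action: str) -> list:
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    record = {
        "seller": seller,
        "action": action,  # e.g. "generated", "sold_shared", "sold_exclusive"
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return chain

def verify_chain(chain: list) -> bool:
    """Recompute every hash; tampering or a recycled insertion breaks a link."""
    prev_hash = "genesis"
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

chain = add_custody_record([], "publisher_a", "generated")
chain = add_custody_record(chain, "aggregator_b", "sold_exclusive")
print(verify_chain(chain))  # True; altering any earlier record returns False
```

With such a trail, a buyer who receives a supposedly exclusive lead twice can pinpoint which link in the chain broke the rules.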

Jaimie Pickles

Jaimie Pickles is co-founder and CEO at First Interpreter.

He was previously general manager, insurance, at Jornaya, which analyzes consumer leads for insurance and other industries.  Before that, he was president and founder of Canal Partner, a digital advertising technology company, and president of InsWeb, an online insurance marketplace.

Is There Risk in Embracing Insurtech?

How can the 2016 results at AIG be explained? Could the recent emphasis on technology, among other things, have caused a loss of focus?

As insurers rush headlong into the digital scramble, they should keep in mind the proverbial iceberg. Not all the risks involved are strictly tied to the innovation itself. Certain ones are below the water level.

Insurers actively participating in the digital revolution have done so in a variety of ways: 1) innovation labs, 2) insurtech accelerators with external partners, 3) investments in insurtech companies and 4) purchases of insurtech companies. These are reasonable approaches for staying current and competitive. However, there are some caveats that should be heeded.

Focus Risk

Insurance is not a simple business. Machines cannot be set to produce thousands of identical items, a sale is not final and competition is never at a low ebb. It is a complex business that relies on actuarial forecasting, capital models, complicated and multi-layered contracts in many cases, and astute claims handling. Thus, companies must remain focused on the functions and metrics fundamental to the business if they are to achieve good results.

Over the years, the insurance industry has adapted to paradigm shifts of all types, for example: 1) automation of internal operations, 2) agent/broker electronic interface, 3) paperless environments, 4) increased transparency with regulators and 5) product development responsive to new risks such as cyber or supply chain disruption. Now, creating new ways to interact with stakeholders digitally and developing products that are fit for purpose in a digital world should be within the capability bounds of these same insurers.

The caution is that insurers should not get so focused on their digital initiatives that they lose proper sight of the basics of the business: underwriting, claims, actuarial, finance and customer service. Equally, insurers cannot lose sight of other disruptive forces in the environment, such as climate change, terrorism and cyber threats.

See also: Insurtech: Unstoppable Momentum

A piece appearing on AIR Worldwide's website, written by Bill Churney, asks, "Have You Lost Focus On Managing Catastrophe Risk?" He alludes to the fact that catastrophes have been light these past 10 years, which may cause inattention, and that many new insurance staffers were not working when Katrina, Andrew or Hugo hit, and thus have no personal experience to tap for handling sizable events. A lack of focus on managing catastrophe risk could be critically detrimental for companies. And although there is nothing concrete to suggest that insurers have lost such focus, the question underscores the possibility of attention deficits. The need for continuous and careful attention to the rudimentary aspects of the business cannot be dismissed, even if they may not seem as exciting or timely as digital inventions.

Within memory, there have been companies that allowed themselves to lose necessary focus. Some got so focused on mergers and acquisitions that core functions were not managed properly while the emphasis was on cross-sales and economies of scale. Some got so intent on improving customer relations that business imperatives were ignored in favor of appeasing the customer. And some got so diversified that senior management did not have the bandwidth to manage the whole enterprise.

How can the 2016 results at AIG be explained? Could the more recent focus on divestitures, staff changes and cuts, a drive to return dividends to shareholders and the CEO's reported concentration on technology have caused it to lose its once unparalleled focus on profitable underwriting, rigorous claims handling and product innovation?

Investment Risk

With investments pouring into insurtech, the question arises: What is left for anything else? Despite fintech investments starting to slow, KPMG reports, "There was a dramatic increase in interest in insurtech in Q3'16, and the trend is expected to continue. The U.S. was the top country in Q3'16 with 10 insurtech deals, valued at $104.7 million in total." These numbers do not capture the many millions of dollars that insurers are investing in insurtech activities internally, of course. As mentioned above, they are spending money to create dedicated innovation labs and accelerator programs and to launch other types of speculative insurtech projects. Many older projects have become operational, including new business unit or company startups, the introduction of bots on company websites, telematics in vehicles, digitized claims handling... and the list goes on.

How does an insurer know when an investment in insurtech is enough or too much, thereby negating other necessary investments required by functions such as underwriting, claims or actuarial? The caution is not about doing an ROI (return on investment) analysis for a specific project. It is about doing an ROI analysis for the portfolio of projects that are vying for funding vis-a-vis the need to keep the company solvent while maintaining progress with the digital strategy. The larger the insurer, the more used it is to managing multiple priorities and projects. For mid-size to small insurers, this skill may be less developed, and they may face even greater risk of getting out of balance.

Growth Risk

Insurance is one of the few industries for which growth can be just as risky as no growth. Industry pundits have long claimed that new business performs about three points worse than policies already on the books. The difference between a company at a combined ratio of 99 and one at 102 can be quite significant; a worked example appears at the end of this article. The causes for this phenomenon have to do with such factors as: 1) the potential for adverse selection, 2) the reasons customers choose to change carriers and 3) the costs associated with putting new business on the books. These are not the only ones. It is harder for actuaries to predict the loss patterns for groups of customers for whom there is no history in the company's database.

See also: Infrastructure: Risks and Opportunities

If the reason for investing in insurtech is to increase new business written, insurers should be cautious about how much and what kind of new business they will write because of their insurtech enhancements. To the extent that insurtech enables insurers to hold on to existing business, the outcome is less risky. For example, it remains to be seen whether drivers who want to buy insurance by the mile are a better or worse risk pool than other drivers, or whether those involved in the sharing economy, such as renting rooms in their homes, are more or less prone to loss than homeowners who do not rent rooms. Are small businesses that are willing to buy their coverage online likely to file a higher number of claims or a lower number compared with small businesses that use an agent? Do insurance buyers who are attracted to peer-to-peer providers have loss experiences at a different rate than those who are not?

Conclusion

The march toward more digitization in the insurance industry will and must go forward. At the same time, insurers should be wise enough to realize and address the underlying risks inherent in this type of aggressive campaign to modernize.
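As promised, here is the worked arithmetic behind the combined-ratio point made under "Growth Risk"; the premium volume is an illustrative assumption.

```python
# Combined ratio = (losses + expenses) / earned premium. Below 100% means an
# underwriting profit, above 100% an underwriting loss.

premium = 1_000_000_000  # a $1 billion book, assumed for illustration

for combined_ratio in (0.99, 1.02):
    underwriting_result = premium * (1 - combined_ratio)
    print(f"Combined ratio {combined_ratio:.0%}: "
          f"underwriting result ${underwriting_result:,.0f}")

# Combined ratio 99%: underwriting result $10,000,000
# Combined ratio 102%: underwriting result $-20,000,000
```

On the same book, a three-point deterioration turns a $10 million underwriting profit into a $20 million loss, which is why growth can be just as risky as no growth.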

Donna Galer

Donna Galer is a consultant, author and lecturer. 

She has written three books on ERM: Enterprise Risk Management – Straight To The Point, Enterprise Risk Management – Straight To The Value and Enterprise Risk Management – Straight Talk For Nonprofits, with co-author Al Decker. She is an active contributor to the Insurance Thought Leadership website and other industry publications. In addition, she has given presentations at RIMS, CPCU, PCI (now APCIA) and university events.

Currently, she is an independent consultant on ERM, ESG and strategic planning. She was recently a senior adviser at Hanover Stone Solutions. She served as the chairwoman of the Spencer Educational Foundation from 2006-2010. From 1989 to 2006, she was with Zurich Insurance Group, where she held many positions both in the U.S. and in Switzerland, including: EVP corporate development, global head of investor relations, EVP compliance and governance and regional manager for North America. Her last position at Zurich was executive vice president and chief administrative officer for Zurich’s world-wide general insurance business ($36 Billion GWP), with responsibility for strategic planning and other areas. She began her insurance career at Crum & Forster Insurance.  

She has served on numerous industry and academic boards. Among these are: NC State’s Poole School of Business’ Enterprise Risk Management’s Advisory Board, Illinois State University’s Katie School of Insurance, Spencer Educational Foundation. She won “The Editor’s Choice Award” from the Society of Financial Examiners in 2017 for her co-written articles on KRIs/KPIs and related subjects. She was named among the “Top 100 Insurance Women” by Business Insurance in 2000.

7 Steps for Inventing the Future

If we can escape the focus on incremental innovation, computing pioneer Alan Kay says, we can gain “unbelievable leverage on the universe.”

Alan Kay is widely known for the credo, "The best way to predict the future is to invent it." For him, the phrase is not just a witty quip; it is a guiding principle that has yielded a long list of accomplishments and continues to shape his work.

Kay was a ringleader of the exceptional group of ARPA-inspired scientists and engineers that created an entire genre of personal computing and pervasive worldwide networking. Four decades later, most of the information-technology industry and much of global commerce depend on this community's inventions. Technology companies and many others in downstream industries have collectively realized trillions of dollars in revenues and tens of trillions in market value because of them.

Alan Kay made several fundamental contributions, including personal computers, object-oriented programming and graphical user interfaces. He was also a leading member of the Xerox PARC community that actualized those concepts and integrated them with other seminal developments, including the Ethernet, laser printing, modern word processing, client-server and peer-to-peer networking. For these contributions, both the National Academy of Engineering and the Association for Computing Machinery have awarded him their highest honors.

I've worked with Alan to help bring his insights into the business realm for more than three decades. I also serve on the board of Viewpoints Research Institute, the nonprofit research organization that he founded and directs. Drawing on these vantage points and numerous conversations, I'll try to capture his approach to invention. He calls it a method for "escaping the present to invent the future," and describes it in seven steps:
  1. Smell out a need
  2. Apply favorable exponentials
  3. Project the need 30 years out, imagining what might be possible in the context of the exponential curves
  4. Create a 30-year vision
  5. Pull the 30-year vision back into a more concrete 10- to 15-year vision
  6. Compute in the future
  7. Crawl your way there
Here's a summary of each step:

1. Smell out a need

"Everybody loves change, except for the change part," Kay observes. Because the present is so vivid and people have heavy incentives to optimize it, we tend to fixate on future scenarios that deliver incremental solutions to existing problems. To reach beyond the incremental, the first step to inventing the future is deep "problem finding," rather than short-term problem solving. Smell out a need that is trapped by incremental thinking.

In Alan's case, the need that he sensed in the late '60s was the potential for computers to redefine the context of how children learn. Prompted by conversations with Seymour Papert at MIT and inspired by the work of Ivan Sutherland, J.C.R. Licklider, Doug Engelbart and others in the early ARPA community, Kay realized that every child should have a computer that helps him or her learn. Here's how he described the insight:
It was like a magnet on the horizon. I had a lot of ideas but no really cosmic ones until that point.
This led Kay to wonder how computers could form a new kind of reading and writing medium that enabled important and powerful ideas to be discussed, played with and learned. But the hottest computers at the time were IBM 360 mainframes costing millions. The use of computers in educating children was almost nonexistent. And there were no such things as personal computers.

2. Apply favorable exponentials

To break the tyranny of current assumptions, identify exponential improvements in technological capabilities that could radically alter the range of possible approaches. In 1965, Gordon Moore made his observation that computing would dramatically increase in power, and decrease in relative cost, at an exponential pace. Moore's prediction, which would become known as Moore's Law, was the "favorable exponential" that Kay applied.

Today, the fruits of Moore's Law, such as mobile devices, social media, cloud computing, big data, artificial intelligence and the Internet of Things, continue to offer exponential advances favorable for invention. As I've previously written, these are make-or-break technologies for all information-intensive companies. But don't limit yourself to those. Kay is especially optimistic about the favorable exponential at the intersection of computer-facilitated design, simulation and fabrication. This is the process of developing concepts and ideas using computer design tools and then testing and evolving them using computer-based simulation tools. Only after extensive testing and validation are physical components ever built, and, when they are, it can be done through computer-mediated fabrication, including 3D printing. This approach applies to a wide range of domains, including mechanical, electrical and biological systems. It is becoming the standard method for developing everything, including car parts and whole cars, computer algorithms and chips, and even beating nature at its own game. Scientists and engineers realize tremendous benefits in terms of the number of designs that can be considered and the speed and rigor with which they can do so. These allow, Kay told me, "unbelievable leverage on the universe."

See also: To Shape the Future, Write Its History

3. Project the need 30 years out and imagine what might be possible in the context of the exponential curves

Thirty years is so far in the future that you don't have to worry about how to get there. Focus instead on what is important to have. There's no possibility of being forced to demonstrate or prove how to get there incrementally. Asking "How is this incremental to the present?" is the "biggest idea killer of all time," Kay says. The answer to the "incremental" question, he says, is: "Forget it. The present is the least interesting time to live in." Instead, by projecting 30 years into the future, the question becomes, "Wouldn't it be ridiculous if we didn't have this?"

Projecting out what would be "ridiculous not to have" in 30 years led to many visionary concepts that earned Kay wide recognition as "the father of the personal computer." He was sure, for example, that children would have ready access to laptops and tablets by the late 1990s, even though personal computers did not yet exist. As he saw it, there was a technological reason for it, there were user reasons for it and there were educational reasons for it. All those factors contributed to his misty vision, and he didn't have to prove it because 30 years was so far in the future.
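To get a feel for the scale such projections ride on, here is a rough, back-of-the-envelope calculation of what 30 years of Moore's Law-style improvement delivers. The doubling periods are common rules of thumb, assumed here for illustration rather than forecast.

```python
# How much improvement does 30 years of exponential doubling buy?
# Doubling every 18 to 24 months is the classic Moore's Law range.

for months_per_doubling in (18, 24):
    doublings = 30 * 12 / months_per_doubling
    improvement = 2 ** doublings
    print(f"Doubling every {months_per_doubling} months for 30 years: "
          f"~{improvement:,.0f}x")

# Doubling every 18 months for 30 years: ~1,048,576x
# Doubling every 24 months for 30 years: ~32,768x
```

Hence the "million times greater computing power" contemplated below.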
How might the world look relative to the needs that you smell out? What will you have ready access to in a world with a million times greater computing power, cheap 3D fabrication, boundless energy and so on? Remember, projecting to 2050 is intended as a mind-stretching exercise, not a precise forecasting one. This is where romance lives, albeit romance underpinned by deep science rather than pure fantasy.

4. Create a 30-year vision

A vision is different from a mission or a goal. If the previous step was about romance, a 30-year vision is more like a dream. It is a vague picture of a desirable state of affairs in that 30-year future. This is the step where Kay's recognition that computers would be widely available by the late 1990s turned into a vision of what form those computers might take. That vision included the Dynabook, a powerful and portable electronic device the size of a three-ring notebook with a touch-sensitive liquid crystal screen and a keyboard for entering information. Here's one of Kay's early sketches of the Dynabook from that time.

[Image: Dynabook concept drawing]

The next illustration is Kay's sketch of the Dynabook in use. He describes the scenario as two 12-year-olds learning about orbital dynamics from a version of "Space Wars" that they wrote themselves. They are using two personal Dynabooks connected over a wireless network.

[Image: Children using Dynabooks]

Kay's peers in the ARPA community had already envisioned some of the key building blocks for the Dynabook, such as LCD panels and an Internet-like, worldwide, self-healing network. (For a fascinating history of the early ARPA community, see Mitchell Waldrop's brilliant book, "The Dream Machine.") For Kay, these earlier works crystallized into the Dynabook once he thought about them in the context of children's education. As he described it,
The Dynabook was born when it had that cosmic purpose.
Laptops, notebook computers and tablets have roots in the early concepts of the Dynabook.

5. Pull the 30-year vision back into a 10- to 15-year lesser vision

Kay points out that one of the powerful aspects of computing is that, if you want to live 10 to 15 years in the future, you can do it. You just have to pay 10 to 20 times as much. That's because tomorrow's everyday computers can be simulated using today's supercomputers. Instead of suffering the limitations of today's commodity computers (which will be long obsolete before you get to the future you are inventing), inventors should use customized supercomputers to prototype, test and evolve aspects of their 30-year vision. Pulling back into the 10- to 15-year window brings inventors back from the pie in the sky to something more concrete.

Jumping into that more concrete future is exactly what Alan Kay did in 1971 when he joined the Xerox Palo Alto Research Center (PARC) effort to build "the office of the future." It started with Butler Lampson and Chuck Thacker, two of PARC's leading engineers, asking Kay, "How would you like us to build your little machine?" The resulting computer was an "interim Dynabook," as Kay thought of it, but better known as the Xerox Alto.

[Image: Xerox Alto]

The Alto was the hardware equivalent of the Apple Macintosh of 1988, but running in 1973. Instead of costing a couple of thousand dollars each, the Alto cost about $70,000 (in today's dollars). PARC built 2,000 of them, thereby providing Kay and his team with the environment to develop the software for a 15-year, lesser-but-running version of his 30-year vision.

6. Compute in the future

Now, having created the computing environment of the future, you can invent the software. This approach is critical because the hardest thing about software is getting from requirements and specification to properly running code. Much of the time spent in developing software goes to optimizing code for the limitations of the hardware environment, i.e., making it run fast enough and robustly enough. Providing a more powerful, unconstrained, futuristic computing environment frees developers to focus on invention rather than optimization. (This was the impetus for another Kay principle, popularized by Steve Jobs, that "people who are really serious about software should make their own hardware.")

The Alto essentially allowed PARC researchers to simulate the laptop of the future. Armed with it, Kay was a visionary force at PARC. He led the Learning Research Group there, and, though PARC's mission was focused on the office environment, Kay rightly decided that the best path toward that mission was to focus on children in educational settings. He and his team studied how children could use personal computers in different subject areas. They studied how to help children learn to use computers and how children could use computers to learn. And they studied how the computers needed to be redesigned to facilitate such learning.

[Image: Children with the Xerox Alto]

The power of the Alto gave Kay and his team, which included Adele Goldberg, Dan Ingalls, Ted Kaehler and Larry Tesler, the ability to run thousands of experiments with children in the process of understanding these questions and working toward better software to address them.
We could have a couple of pitchers of beer at lunch, come back, and play all afternoon trying out different user interface ideas. Often, we didn’t even save the code.
For another example of the "compute in the future" approach, take Google's driverless car. Rather than using off-the-shelf or incrementally better car components, Google researchers used state-of-the-art LIDAR, cameras, sensors and processors in their experimental vehicles. Google also built prototype vehicles from scratch, in addition to retrofitting current car models. The research vehicles and test environments cost many times as much as standard production cars and facilities. But they were not meant for production. Google's researchers know that Moore's Law and other favorable exponentials will soon make their research platforms practical. Its "computing in the future" platforms allow Google to invent and test driving algorithms today on the car platforms of the future. Google greatly accelerated the state of the art of driverless cars and ignited a global race to perfect the technology.

Google recently spun off a separate company, Waymo, to commercialize the fruits of this research. Waymo's scientists and engineers are learning from a fleet of test vehicles driving 10,000 to 15,000 miles a week on public roads and interacting with real infrastructure, weather and traffic (including other drivers). The developers are also taking advantage of Google's powerful cloud-based data and computing environment to do extensive simulation-based testing. Waymo reports that it is running its driving algorithms through more than three million miles of simulated driving each day (using data collected by its experimental fleet).

See also: How to Master the ABCs of Innovation

7. Crawl your way there

Invention requires both inspiration and perspiration. Inspired by this alternative perspective on their work, researchers can channel their perspiration much more effectively. As Kay is known for saying, "Point of view is worth 80 IQ points."

PARC's success demonstrates that even if one pursues a 15-year vision, or, more accurately, because one pursues such a long-term vision, many interim benefits might well come of the effort. And, while the idea of giving researchers 2,000 supercomputers and building custom software environments might seem extravagant and expensive, it is actually quite cheap when you consider how much you can learn and invent.

Over five glorious years in the early 1970s, the work at PARC drove the evolution of much of future computing. The software environment advanced to become more user-friendly and supportive of communications and different kinds of media. This led to many capabilities that are de rigueur today, including graphical interfaces, high-quality bit-mapped displays, what-you-see-is-what-you-get (WYSIWYG) word processing and page layout applications. The hardware system builders learned more about what it would take to support future applications and also evolved accordingly. This led to hardware designs that better supported the display of information, network communications and connections to peripherals, rather than being optimized for number crunching. Major advancements included Ethernet, laser printing, peer-to-peer and client-server computing and internetworking.

Kay estimates that the total budget for the parts of Xerox PARC that contributed to these inventions was about $50 million in today's dollars. Compare that number with the hundreds of billions of dollars that Xerox directly earned from the laser printer.
[Image: Xerox 9700 printers]

Although the exact number is hard to calculate, the work at PARC also unlocked trillions of dollars reaped by other technology-related businesses. One of the most vivid illustrations of the central role that Xerox played was a years-later exchange between Steve Jobs and Bill Gates. In response to Jobs’ accusation that Microsoft was stealing ideas from the Mac, Gates told him:
Well, Steve, I think there’s more than one way of looking at it. I think it’s more like we both had this rich neighbor named Xerox, and I broke into his house to steal the TV set and found out that you had already stolen it.
Kay cautions that his method is not a cookbook for invention. It is more like a power tool that must be wielded by skilled hands. It is also a method that has been greatly enabled by Kay and his colleagues’ inventions. Beyond the technology industry that they helped spawn, their inventions also underpin discovery and innovation in every field of science and technology, including chemistry, biology, engineering, health and agriculture. Information technology is not only a great invention; it has reinvented invention. It powers the favorable exponential curves on which other inventors can escape the present and invent the future.

See also: How We’re Wired to Make Bad Decisions

For his part, Kay continues to lead research at the frontiers of computing, with a continued emphasis on human advancement. In addition to his Viewpoints Research Institute, he recently helped to formulate the Human Advancement Research Community (HARC) at YC Research, the non-profit research arm of Y Combinator. HARC’s mission is “to ensure human wisdom exceeds human power, by inventing technology that allows all humans to see further and understand more deeply.” That is a future worth inventing.

The Brewing Crisis Over Jobs

This should be the most innovative decade in history, and it must be if we’re going to avoid a Mad Max dystopia in favor of a Star Trek future.


Everyone has heard the old anecdote about the frog in a pot of water. If the temperature is raised slowly, the frog won’t react, eventually allowing itself to be boiled. That’s where we’re heading as a country when it comes to technological advances and the threat they pose to millions of jobs. Seemingly every day, there are new stories in the media about artificial intelligence, data and robotics — and the jobs they threaten in retail, transportation and even the legal profession. Yet no one is jumping out of the pot.

Let’s be clear: This is not science fiction. In recent days alone, there have been articles on Amazon’s automation ambitions, described by the New York Times as “putting traditional retail jobs in jeopardy,” and on the legal profession bracing for technology taking over some tasks once handled by lawyers. As reported in Recode, a new study by PwC found that nearly four out of 10 jobs in the U.S. could be “vulnerable to replacement by robots in the next 15 years.” Many of those will be truckers, among the most common jobs in states across the country.

See also: Why Trump’s Travel Ban Hurts Innovation

Yet when President Trump hosted truck drivers at the White House recently, he dedicated his remarks to healthcare without uttering a word about the advanced driverless semi fleets that will soon replace them. His Treasury secretary, Steven Mnuchin, shockingly said in an interview last week that we’re “50 to 100 years” away from artificial intelligence threatening jobs. It’s easy for sensationalist headlines about AI to dominate, like those about Elon Musk’s warning that it poses an existential threat. Yet the attention of people such as Musk, Bill Gates and Stephen Hawking should be a signal to Trump and Mnuchin that AI and related robotics and automation are moving at a far faster clip than they acknowledge. The issue should be on the administration’s radar screen, and officials should be jumping out of the boiling water.

Solutions won’t come easily. Some experts already suggest that a universal basic income will be necessary to offset the job losses. We also have to help our workforce make the transition. Educational institutions such as Miami Dade College and Harvard University have introduced advanced programming courses that take students from zero to six programming languages on a fast track. More needs to be done. This should be the most innovative decade in human history, and it has to be if we’re going to avoid a Mad Max dystopia in favor of a Star Trek future.

Of course, there are those who say similar warnings were raised as technology revolutionized agriculture and other industries along the way. They might argue that then, as now, those advances led to more jobs. We would all welcome that and the potential these changes represent for improving lives.

See also: Can Trump Make ‘the Cyber’ Secure?

Technological advances could greatly reduce the cost of living, make housing more affordable and solve some of our biggest challenges, whether in energy or long-term care, an issue painfully familiar to so many families. They may also improve quality of life in the long term, as men and women gain the flexibility to spend time with loved ones rather than dedicating 40 or more hours a week to work, plus many more to commuting. In the near term, however, the possible job losses could inflict tremendous economic pain. We are far from where we need to be.
That will continue to be the case until policymakers, educators and innovators come together to address the reality before us. We won’t solve this overnight, but we can’t afford to wait until it’s too late. This was written by Vivek Wadhwa and Jeff Greene.


Vivek Wadhwa

Vivek Wadhwa is a fellow at the Arthur and Toni Rembe Rock Center for Corporate Governance at Stanford University; director of research at the Center for Entrepreneurship and Research Commercialization at Duke University's Pratt School of Engineering; and a distinguished fellow at Singularity University.

What Your Broker Should Be Asking You

There are two main things you can do to ensure you get the proper business insurance coverage at the best rates for your company.

When you’re running a business, insurance is one of those things that you know you have to have, but you don’t want to think about too much. There are so many different types of policies, state regulations and industry requirements that trying to determine what you need can be overwhelming, and that doesn’t even include trying to get the best rates once you settle on a policy. Fortunately, insurance brokers will manage all of that for you. But to do so, they need to know a lot about your business. While you may not want to know all the details and intricacies of securing the various business insurance policies you need, you want to be sure the insurance broker handling things for you can get you comprehensive coverage at the best possible rates.

What You Want to Hear

The primary thing you want to hear from your broker is a lot of questions. The more details the broker has about the way your business is structured, the policies and procedures you have in place and the types of risks and exposures your workers encounter, the better able the broker is to find you the best policy matches.

See also: No, Brokers Are Not Going Away

Of course, the kinds of questions the broker asks matter, too. Some of the basics may seem obvious, like how many workers’ compensation claims you’ve had in the past and what types of safety programs you have in place. Some other important questions might not seem relevant right away, however. Examples include:
  • What are your hiring and HR practices?
  • What types of clients do you typically serve?
  • Exactly what kinds of lifting activities do your employees perform and how frequently?
  • Do you have a program in place to aid injured employees in returning to work?
  • How long have you been in business?
  • Do your employees ever have to work at great height or underground, and for how long?
  • What type of experience does your management team have?
  • Is your work seasonal?
  • What is your current loss control policy?
Not all of these questions may be necessary, depending on the type of coverage you’re seeking, but in general, the more your broker asks about your business, the better.

Why It Matters

While it may be tedious at times, it’s important to provide your broker with as many details as possible in response to any questions asked. That’s because the methods for calculating coverages and premiums take into account a wide range of data that insurance companies use to determine your risk for various types of claims. Your broker will use the details about your company to develop a profile of your business, then use that profile when comparing rates and options available through various insurers. Finding the best fit for you, and generally the lowest rates, is your broker’s goal; to accomplish it, the broker needs all the information that could potentially affect those rates, both positively and negatively. To ultimately determine your premiums, this information will be combined with data on accident frequency in your industry, state regulations and guidelines, and experience modifiers (a simplified numeric sketch of how these pieces combine appears at the end of this article). The more you can give, the better the chance that your rate will be fair and appropriate for your risk.

What You Can Do

There are two main things you can do to ensure you get the proper business insurance coverage at the best rates for your company. The first is to choose a knowledgeable and experienced broker you have confidence in. You may want to speak to a few before settling on one, and a lot of what you should evaluate them on is the type of questions they ask about your business.

See also: Will You Be the Broker of the Future?

The second is to take the time to gather as much detailed information as possible in answer to those questions. The broker's job is to represent you to the insurance companies in the best possible light, and only you can provide the tools needed to do that.
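To make the premium arithmetic above concrete, here is a deliberately simplified sketch of one common structure: payroll per $100 times a class rate gives a manual premium, which an experience modifier then scales up or down based on the company's loss history. This is an illustration under simplifying assumptions, not any particular insurer's rating method; real calculations layer in state rules, schedule credits and debits, and carrier-specific factors, and every number below is hypothetical.

```python
# Illustrative only: a simplified workers' compensation premium estimate.
# Real rating adds state rules, schedule credits/debits and carrier factors.
# All inputs below are hypothetical.

def estimate_wc_premium(annual_payroll: float,
                        class_rate_per_100: float,
                        experience_mod: float) -> float:
    """Estimate premium as (payroll / $100) * class rate, scaled by the
    experience modifier reflecting the firm's own loss history
    (below 1.0 = better-than-average losses, above 1.0 = worse)."""
    manual_premium = (annual_payroll / 100) * class_rate_per_100
    return manual_premium * experience_mod

# A firm with $2,000,000 in payroll, a class rated at $4.50 per $100 of
# payroll, and a 0.85 mod earned through safety and return-to-work programs:
print(estimate_wc_premium(2_000_000, 4.50, 0.85))  # 76500.0
```

The detail worth noticing is the experience modifier: the safety programs, return-to-work practices and loss history your broker asks about feed directly into that multiplier, which is why thorough answers can translate into a lower rate.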

Nick Roumi

Nick Roumi is founder and CEO of Radius Insurance. He was twice named to the prestigious Inc. 500 list of the fastest-growing businesses in America and was nominated for Entrepreneur of the Year by Ernst & Young. Roumi loves solving problems and enjoys helping fellow business owners tackle their risk and operational challenges.

The Need for Agile, Collaborative Leaders

Change has arrived in the insurance industry, and it has decided to stay a while and get comfortable. Leaders must adapt, constantly.

Change has arrived in the insurance industry—and it has decided to stay a while and get comfortable. In this recap of a general session from The Institutes CPCU Society 2016 Annual Meeting, industry executives spoke about the ramifications of today’s landscape of change and the need for insurance and risk management professionals to stay on top of the latest technologies and technological issues, such as cyber risk, and to embrace ACE: agility, collaboration and education.

The insurance and risk management industry is constantly evolving and, like many of the industries it insures, is currently at an introspective point, as change is all around. Much of this change is driven by the increasing use of technology, which is having a profound effect on businesses, individuals and society as a whole. Cyber risk, for example, is a major concern for insurers. And with nearly half of insurance professionals planning to retire in the next 10 years, developing a new generation of leadership is essential. To be an insurance and risk management leader in today’s environment requires continuing education. That was the consensus of a panel of industry chief executive officers who, at The Institutes CPCU Society 2016 Annual Meeting in Hawaii, discussed emerging trends in insurance and risk management and how to be a leader in the modern work environment.

I had the pleasure of moderating the panel, “CEO Conversations, Becoming a Leader,” which included Jeffrey Bowman, FCCA, senior adviser in Deloitte's Insurance Consulting Practice and chairman of The Institutes’ Board of Trustees; Albert “Skip” Counselman, CPCU, chairman and chief executive officer of Riggs, Counselman, Michaels & Downes; Alan Krapf, CPCU, president of the Property and Casualty Insurance Group at USAA; and Christine Sears, CPA, CPCU, president and chief executive officer of Penn National Insurance.

See also: Best Insurance? A Leadership Pipeline

All of the panelists have experience managing change in the industry and implementing new technologies, regulations and working practices. As great leaders themselves, they have helped others grow into leadership roles within their own organizations. They also serve as board members of The Institutes, which has given me the pleasure of knowing them for many years. The panelists agreed that, for the industry and its professionals, honing critical-thinking skills, maintaining knowledge of emerging issues—such as growing technology and data analytics—and then being able to use and apply that knowledge are critical to future success. Regardless of professionals’ comfort level with technology, lifelong learning about it, as well as about economics, societal changes and other new developments, is vital to the advancement of both their careers and the industry.

Understanding New Technology

In regard to new technology, the panelists noted that, though it can help facilitate communications, analysis and efficiency, it also poses a large risk. For example, Bowman said that understanding, preventing and insuring cyber risk is a major concern that professionals are still working through.
Because cyber risk is evolving quickly, is very complicated and has many elements, “nobody really has this right at the moment,” he said, adding that companies also have to be aware of third-party risks: “It brings in a whole realm of issues around compliance, regulation and governance that everybody has to be aware of.”

Counselman noted that cyber crime does not discriminate but affects everyone: individuals, large businesses and small businesses. “You can buy insurance, you can transfer the risk, but transferring the risk isn't the entire answer,” he said. “What's really the answer is being vigilant and educated, learning and trying to stay one step ahead. And that's the message we have to get across, because just as we thought about fire insurance and general liability insurance for years and years as being the mainstay of what we were doing and telling our clients about, this cyber risk can shut down a client and put a client out of business very quickly if the appropriate safeguards aren't enforced.”

Chief among corporate cyber risks is reputational risk. Krapf said: “It's not just about protecting the data and the financials. It’s also about the brand. How do I protect the reputation of my company, too?” Sears added that reputational damage from cyber crimes can cause billions of dollars in damage. “What is really key is that you have a plan in place for when that happens,” she said. “And so, all of us should have a crisis management plan in place so we know that when it happens—because it really is more a matter of when—we know exactly what the processes are that we're going to follow.” Accordingly, she said, companies should have a plan in place to handle a public relations crisis quickly.

ACE in the Hole: Remaining Agile, Collaborative and Educated

The panelists all agreed that, to address the rapid changes in technology and other spheres, continuing education and agility are essential. “Really, what is happening today is a fourth industrial revolution: technology in the insurance industry,” Bowman said. “To deal with the changes that are coming in and the changes that have to happen within organizations, you have to have qualified staff.”

Panelists also discussed how collaboration across departments is key to dealing with the fast pace of technological change. “To be successful in observing and understanding change, deciding what to do about change and implementing change, you need to collaborate today,” Counselman said. “You can’t just make your own plans within your one division or within your one department. You need to collaborate. You need to have input from people who might be involved on a daily basis in property-casualty coverages and risk management advice, in IT advice and financial planning. You need all of that, and you need to be effective at giving everyone the opportunity to understand the issue that you’re trying to approach and determine your strategy—and you need that input across divisions.”

Diversity can enhance collaboration, the panelists asserted. With a diverse workforce comes diverse perspectives, which can aid in everything from product development to customer relationships and risk management. “Diversity lets you come up with richer and better decisions and allows you to come up with an answer that’s not just the answer that’s always been out there,” Krapf said.

Allowing Professionals to Shine

Part of facilitating collaboration across departments is the move to more decentralized organizations.
Decentralized organizations are often flatter and less bureaucratic, thereby helping empower employees to be more involved in decision-making processes. Krapf added that institutional success further depends on a clear explanation of the mission: “You have to make sure you’re clear with all of your employees about what you are trying to accomplish and then let them make decisions.”

To be well-equipped to make proper decisions in today’s rapidly changing landscape, insurance professionals must continue to learn. Gaining information and ensuring a solid understanding of that information are competitive advantages in the workplace. This idea was reinforced by Sears, who said, “Lifelong learning is absolutely what got me to the position that I’m in today.”

With nearly half of insurance professionals expected to retire from the industry in the next decade, the industry needs insightful and capable new professionals. The good news for the industry, and specifically for CPCUs, is that they have proven their commitment to lifelong learning and to staying on top of industry issues. Changes in insurance, business and society present both opportunities and challenges for ensuring professional growth and leadership development and for grooming a generation of professionals with different working styles. From the panel’s perspective, insurance professionals are clearly going to have to work harder than ever to keep up with new developments and best practices and to develop creative solutions. Doing so will enable them to thrive within the industry’s dynamic work environment and help the industry evolve.

See also: Better Way to Think About Leadership

Looking out from my moderator’s chair at the hundreds of new and veteran CPCUs in the audience, meeting many more at the CPCU Society Annual Meeting and interacting daily with members of the industry, I am optimistic about the future and excited about the opportunities in front of all of us. The insurance industry plays a vital role in making people’s lives easier. Insurance offers the promise that, if you pay your premiums, you will be protected from certain forms of catastrophic risk, thereby allowing you to take risks you otherwise could not. Through mutual trust, insurance also provides the peace of mind needed for families to buy a house or car, for entrepreneurs to start a business and for large companies to expand overseas. In this way, insurance helps oil the wheels of the economy.

As holders of the industry’s premier designation, CPCUs are the insurance industry’s natural leaders and role models for continuing education. To this point, Counselman told attendees, “The most important thing you can do is commit yourself to lifelong learning. Getting your CPCU designation is only the beginning.” CPCUs should take great pride in their industry, their hard work and their accomplishments to date. There will be many opportunities ahead. I encourage CPCUs to raise their hands and seek these out. Find a mentor. And always keep learning.