
Flood Insurance: Are the Storm Clouds Lifting?


With levees breaking and floods covering much of the Midwest, and with record snow in the Sierra Nevada soon to start melting and rushing into California's waterways, let's look at where we stand on the renewal of the National Flood Insurance Program. There is reason for optimism about the NFIP, currently set to expire May 31, but there are also significant caveats.

The optimism stems from the fact that the NFIP is finally moving into the 21st century in terms of technology.

Established by Congress in 1968, the program was designed to ensure that affordable flood insurance is available to property owners in vulnerable areas, while requiring that everyone in a participating community buy insurance if any financing is in place on the property. But the decisions on vulnerability have been crude. Basically, someone pulls out a paper map: If it says "flood plain," you have to buy insurance; if it doesn't, you don't, no matter whether you're on top of a hill inside that flood plain or in a low-lying spot in an area not generally deemed vulnerable.

Now, the easy availability of cameras, drones and more sophisticated mapping tools makes it possible to incorporate precise measurements of elevation and determine vulnerability by individual property, not through some decision that three or four square miles constitute a flood plain. In this podcast I recorded with Nick Lamparelli, cofounder and chief underwriting officer of reThought Insurance, he says it's actually possible to take the precision far further: You can, for instance, price flood insurance based not just on how high above ground expensive possessions, such as computer servers, are in a building but on whether they are on a vulnerable side of the building or in a more protected area. Even in today's dysfunctional Washington, the advances in technology are finding their way into the hearings on the NFIP.
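
To make the shift from zone-level to property-level pricing concrete, here is a minimal sketch of how measured elevation and contents placement might feed a premium adjustment. It is purely illustrative: the factor names, weights and thresholds are assumptions invented for the example, not reThought's or anyone else's actual rating model.

```python
# Toy illustration of property-level flood pricing.
# All weights and factors below are invented for the example.

def flood_premium(base_rate, ground_elev_ft, base_flood_elev_ft,
                  contents_height_ft, exposed_side=False):
    """Scale a base premium by how far the structure and its contents
    sit above (or below) the modeled base flood elevation."""
    freeboard = ground_elev_ft - base_flood_elev_ft      # structure vs. expected flood level
    contents_clearance = freeboard + contents_height_ft  # e.g., servers on an upper floor

    # Assumed relationship: each foot of clearance trims expected loss.
    structure_factor = max(0.2, 1.0 - 0.10 * freeboard)
    contents_factor = max(0.1, 1.0 - 0.08 * contents_clearance)
    if exposed_side:  # contents on the flood-facing side of the building
        contents_factor *= 1.25

    return round(base_rate * (0.7 * structure_factor + 0.3 * contents_factor), 2)

# Two properties in the same mapped flood plain can now price very differently:
print(flood_premium(2_000, ground_elev_ft=-1, base_flood_elev_ft=0,
                    contents_height_ft=0, exposed_side=True))   # low-lying, exposed
print(flood_premium(2_000, ground_elev_ft=8, base_flood_elev_ft=0,
                    contents_height_ft=12))                     # on the hilltop
```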

Getting the NFIP to deal with models far closer to reality could help with pricing, which has failed to keep up with claims for 15 years. In 2017, for instance, the NFIP took in $3.6 billion in premiums and paid $8.7 billion in claims—not exactly a sustainable situation. Congress forgave $16 billion of NFIP debt in 2017, but the program still is more than $20 billion in the hole. Having prices reflect the actual risk would eliminate the deficits (which have essentially become subsidies by taxpayers for those in vulnerable areas).
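
A quick back-of-the-envelope calculation, using only the 2017 figures above (and ignoring expenses and investment income), shows just how far premiums trailed claims:

```python
premiums_2017 = 3.6   # $ billions the NFIP took in
claims_2017 = 8.7     # $ billions the NFIP paid out

loss_ratio = claims_2017 / premiums_2017   # claims per premium dollar
shortfall = claims_2017 - premiums_2017    # added to the program's debt

print(f"2017 loss ratio: {loss_ratio:.0%}")          # ~242%
print(f"2017 shortfall:  ${shortfall:.1f} billion")  # ~$5.1 billion
```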

But here is where the caveats begin.

Roughly half of those who were in the NFIP in 2009 have since dropped out because they found the prices too high. Now imagine that prices rise for the vast majority of people.

I can tell you one thing that will happen: People will complain. And they will find advocates to take up their cause, whether in media or in the halls of government. Already, Senate Minority Leader Chuck Schumer has sounded the alarm on behalf of constituents on Long Island. (Those who benefit from price reductions will quietly pocket their gains.)

Although the NFIP renewal effort has been solidly bipartisan thus far, it's easy to see how divisions might arise. Better understanding of risk will make it clear just what areas have been getting subsidies, and, while both parties might rally behind subsidies for a purple state like Florida, what about a red state like Texas or a blue state like New York? In addition, there is talk of subsidizing insurance for those below a certain income level -- not always an area of common ground for Republicans and Democrats. And don't look now, but the House committee responsible for NFIP legislation is currently run by Maxine Waters, a frequent target of the president's Twitter-ing thumbs.

Even if the NFIP is finally renewed for good, after repeated extensions since 2017, the work won't be done. Climate change will cause new vulnerabilities that will force flood insurance to continue to adapt. 

So, rays of sunshine are breaking through the clouds, but we'll certainly face another storm when a new NFIP pricing model takes effect, and that may be just one of many we still have to outlast. 

Cheers,

Paul Carroll
Editor-in-Chief


Paul Carroll

Paul Carroll is the editor-in-chief of Insurance Thought Leadership.

He is also co-author of A Brief History of a Perfect Future: Inventing the Future We Can Proudly Leave Our Kids by 2050 and Billion Dollar Lessons: What You Can Learn From the Most Inexcusable Business Failures of the Last 25 Years and the author of a best-seller on IBM, published in 1993.

Carroll spent 17 years at the Wall Street Journal as an editor and reporter; he was nominated twice for the Pulitzer Prize. He later was a finalist for a National Magazine Award.

Smart Home Devices: the Security Risks

Smart devices often represent the most vulnerable point on any given network, exposing customers and insurers alike to potential risks.

Smart devices have become a popular topic in the P&C insurance world. Tools like smart thermostats, smoke detectors and water sensors offer the potential to halt property damage before it starts, protecting insurance customers from injury, property loss or both. Yet these devices come with risks.

Because smart devices often represent the most vulnerable point on any given network, they expose customers and insurers alike to potential risks. Insurance companies that understand these risks are better positioned to protect both customers and themselves.

The Rising Trend of Smart Device Use

Smart home devices were a wildly popular gift during the 2018 holiday season. Amazon broke records for sales of its Echo and Alexa devices, Voicebot’s Bret Kinsella says. Sales of smart sensors, security systems, wearable devices and smart toys were also strong.

Currently, the most common smart devices used in private homes are televisions and digital set-top boxes, says Gartner research director Peter Middleton. Initially more popular among businesses, tools like smart electric meters and security cameras are becoming more popular among homeowners.

As more people use smart devices, insuring these devices becomes more important. Even Amazon has announced an interest in offering homeowners insurance to complement its smart devices like Alexa speakers and Ring Alarm systems, says Julie Jacobson at CEPro.

Growing Security Concerns for the Internet of Things

As reports of data theft, hacking and other malfeasance reach the news, concerns about security and privacy in the smart device realm grow. For instance, a distributed denial of service (DDoS) attack in 2016 incapacitated websites for internet users across the East Coast of the U.S. The attack was launched from an army of smart devices conscripted by malware, says Lisa R. Lifshitz, who works in internet law and cybersecurity. In this attack, many of the devices’ owners didn’t even know they were involved.

These events have raised concerns about device security among both government regulators and private device owners. Insurers seeking to offer smart devices to customers can play a role, as well.

See also: Smart Home = Smart Insurer! 

Laws and Regulations Address Smart Device Security

Most laws and regulations to address smart device security are still in their infancy. Although the U.K. introduced guidelines for improving IoT security in 2018, the guidelines remain voluntary. This means that not all manufacturers will adhere to them, says Rory Cellan-Jones, a technology correspondent for the BBC.

In September 2018, California became the first U.S. state to pass a law addressing smart device security. The bill sets minimum security requirements for smart device manufacturers selling their devices in California. It takes effect Jan. 1, 2020.

Rather than listing specific requirements, the California law sets a standard for determining whether security is reasonable. For instance, the security features must be appropriate to the device’s nature and function. They must also be designed to protect the device and its information from unauthorized access, modification or other forms of tampering, say Jennifer R. Martin and Kyle Kessler at Orrick.

Customer Interest in Security Has Increased

As smart devices become more popular, so do demands for greater security and privacy regulations. A 2018 study by Market Strategies International found that people who use smart devices at home or at work are twice as likely to believe that governments should regulate the devices.

“We believe that these workers have already seen the massive potential of the IoT and recognize that the risks - data security, privacy and environmental - are very real,” explains Erin Leedy, a senior vice president at Market Strategies.

With a sense of both the potential and the risks, smart device users become more interested in stronger regulations to protect privacy. A 2017 study by digital platform security firm Irdeto polled 7,882 smart device users in six different countries worldwide. Researchers found that 90% of those polled believe that smart devices need built-in security. Yet, respondents also said they too had a role to play in keeping themselves secure: 56% said that users and manufacturers share responsibility to prevent their devices from being hacked, security director Mark Hearn says.

Consumers understand that their devices can pose risks, and they’re willing to join the fight to protect their privacy and data security. Insurance companies can help them do so by providing the information they need to make smart decisions with smart devices.

Who Controls Your Customers’ Devices?

When today’s smart home devices were designed, the main goal was to simplify tasks and make life more efficient. Security took a backseat to functionality, Fortinet’s Steve Mulhearn says.

To function well, smart home devices must integrate seamlessly with other devices — meaning they’re often the weakest security point on a network. Hackers have noticed these weaknesses and are taking advantage of them. In August 2018, the Federal Bureau of Investigation issued a public service announcement warning that IoT devices could be hacked and conscripted into malicious or illegal online activity.

“Everything from routers and NAS devices to DVRs, Raspberry Pis and even smart garage door openers could be at risk,” says Phil Muncaster at Infosecurity Magazine. While some devices are at higher risk than others, no smart device is totally safe from attempts to use it for ills like click fraud, spam emailing and botnet attacks.

Helping Customers Understand and Address Smart Device Risks

Most smart device users want to play a role in preventing privacy and security breaches. Yet they don’t always know how to participate effectively. Help Net Security managing editor Zeljka Zorz recommends that homeowners adopt smart devices only after asking and answering two questions:

  • Will the device improve the quality of my life/fill a need I have?
  • Am I satisfied with the level of security and privacy the manufacturer provides users?

Insurers seeking to incorporate smart devices into their business and their customers’ lives can help by providing answers to both questions. As Steve Touhill explains on the Resonate blog, demonstrating the usefulness of smart devices can help insurers attract new customers. Smart device owners are 42% more likely to change insurance companies in the coming year. They’re also more open to embracing insurers that offer smart device discounts or support.

Insurers can help customers protect themselves by providing information on privacy and security issues. Options include comparisons of security options for various devices, information on changing usernames and passwords, how-to guides for installing regular updates and checklists for spotting signs of cyber tampering.

When presented as best practices for using smart home devices, these steps can help homeowners and insurers address security risks without raising undue alarm. Property and casualty insurers that encourage smart device use play an important role in influencing how customers use their devices. While this relationship can be beneficial for both insurers and customers, insurers that enter it face further privacy and security complications.

Protecting Customer Privacy

Insurance companies will need to consider how to protect customer privacy while still gathering relevant data from smart home devices. This is because smart devices offer the potential to provide more data to insurance companies, changing everything from policy recommendations to underwriting accuracy, Mobiquity’s Sydney Fenkell says.

See also: How Smart Is a ‘Smart’ Home, Really?

Gathering this data requires insurance companies to be smart about protecting the privacy of customers and the security of the information received.

“It is not a matter of if but when these systems will be compromised, and the consequences could be much more severe than lost Social Security numbers,” says Dimitri Stiliadis, chief technology officer at Aporeto.

Moreover, P&C insurers will need to protect their own internal networks, because communication with these devices can present a weak point.

Being Smart About Smart Device Data Use

The use of smart device data was recently brought to light by an announcement from the insurance company John Hancock, which made public its intention to incorporate information from fitness wearables like Garmin and Fitbit devices into calculations of life insurance premiums. This raised a number of concerns with customers, says Chris Boyd, a Malwarebytes senior threat researcher who goes by the pseudonym paperghost. Boyd notes that these devices often have weak security, which means that a user’s personal data could be altered, affecting insurance premiums.

Similar concerns arise for users seeking to link smart devices with their auto, homeowners or renters insurance. A hacked or malfunctioning device that reports multiple loss events, or that fails to report events that did happen, could affect customers’ insurance rates unless human intervention in the system verifies the event.

For insurers, one of the best early principles to adopt may be one of transparency, says Chris Middleton at Internet of Business. When consumers know what information their smart home device collects and transmits, and under what security protocols or safeguards, they are better-equipped to understand and use the device in a way that benefits both their interests and those of their insurer.


Tom Hammond

Tom Hammond is the chief strategy officer at Confie. He was previously the president of U.S. operations at Bolt Solutions. 

An AI Road Map to the Future of Insurance

It is striking how many carriers cite difficulties in actually embedding AI, a technology they recognize as so integral to their future success. 

2018 saw unprecedented advances in the investment in and deployment of artificial intelligence capabilities within the insurance industry, and 2019 promises to be no different. With the Insurance AI and Analytics USA Summit kicking off in Chicago, May 2-3, we spoke to some of our speakers to gauge their thoughts on the challenges that insurance carriers face in implementing AI.

“AI is impacting insurance at an unprecedented pace,” says Bilal Parviz, vice president of product development at Arch Mortgage Insurance. Eugene Wen, Manulife VP of group advanced analytics, adds: “With innovative new players entering the industry and the traditional players trying to catch up, the industry is going to experience further significant change.” William Dubyak, VP of analytics for product development and innovation at USAA, says: “It’s impossible to open a magazine without seeing hype about analytics changing every aspect of your life.”

Although good progress has been made to date, there is a definite sense that we are only at the tip of the AI iceberg. In the eyes of Chuck Gomez, Novarica VP of research consulting: “Each year, the topic of AI gets more interesting as emerging technology evolves and adoption rates go up, indicating that more can be accomplished with progress. While the subject of analytics has been around insurers for a while…there’s still a lot to learn about analytics centered around underwriting, claims and customer service.”

See also: Future of Insurance Looks Very Different

Insurance AI and Analytics USA will address distinct applications of AI, and Thomas Sheffield, SVP, specialty claims, at QBE, who will speak on the claims track, says: “From a claims perspective, our next 10 or 20 years will be defined by how well we embrace technology, artificial intelligence and the nearly boundless opportunities that arise from those advancements.”

Still, in conducting our research for the Insurance AI and Analytics USA agenda, it was striking how many carriers cited difficulties in actually embedding the technology they recognize as so integral to their future success.

Ira Sopic

Ira Sopic is currently focused on how insurance carriers are integrating AI and advanced analytics into their existing processes to increase efficiency and revolutionize the way they work. This includes the key partnerships that the industry is creating and a clear picture of how the future will be shaped.

Marsh/JLT: What Happens Next?

With the E.U. having approved the deal, only one question remains: Just how much of a train wreck will the Marsh/JLT merger be?

With the E.U. having approved the deal, only one question remains: Just how much of a train wreck will the Marsh/JLT merger be? Before answering that question, however, it is first worth understanding the circumstances that led to the deal happening in the first place.

Let’s clear one thing up straight away. The one thing this deal wasn’t, despite all carefully crafted protestations to the contrary, was strategic. JLT did not in my view sell because it believes that the combined business will be a better or faster-growing one or because it had run out of road -- the 5% organic revenue growth, 25% growth in underlying trading profit and 18% improvement in the underlying trading margin contained in the last set of results would certainly suggest otherwise! Rather, JLT's hand was forced because Jardine Matheson decided it wanted out, and management decided that a sale to Marsh was the least worst option. At least, that’s the rather convenient line that I'm sure is being peddled internally.

Of course, there would be an element of truth to this: The changing of the senior guard at JM over recent years -- in particular, the untimely death of Rodney Leach -- would inevitably lead to an internal re-evaluation of the wisdom of JM holding such a significant investment outside of its core Asian markets. But there were other factors at play, too.

First, JLT’s bold U.S. retail strategy was unlikely to meet the commitments made when its plans were announced in September 2014 -- "that the business will start to contribute to profits in 2018 and then generate an accelerated return thereafter" -- despite the huge progress that had been made. Selling out now avoided some very awkward questions from increasingly impatient investors.

Second, JLT had signally failed to put in place a credible CEO succession strategy, not helped by a sense that no one else could hold the group together -- which would no doubt have played a major role in JM’s own thinking and its willingness to fund staff retentions to the tune of £50 million in its anxiety to get the deal away. Louis XV once said, “Après moi, le déluge.” A sale solved a problem that JLT's board had been unable or unwilling to tackle for years.

From Marsh’s perspective, it is also hard to see the deal as in any way strategic, unless getting bigger is a strategy -- which in the case of Marsh of course it may well be! Combining the two businesses will not magically boost Marsh’s organic growth rate -- if anything, 2+2 here may well equal 3, at best. It is also hard to believe that the merged business will allow Marsh -- lest we forget, already the world’s largest broker -- to access markets or territories that were somehow previously closed to it.

In fact, the biggest factor here -- beyond CEO ego, which is probably the single most under-appreciated factor in all large-scale M&A -- was almost certainly the fear that, if Marsh didn’t buy JLT, Aon would. That fear was almost certainly shared, by the way, by the JLT management team, who have demonized Aon for years. The opportunity for Marsh to put clear blue water between itself and its nearest competitor could not be missed, even if the result is something of a Frankenstein creation. Aon’s aborted pursuit of Willis shows that these things matter, and no doubt that deal will also happen at some stage.
Coming back then to my original question around the prospects for the merger: Perhaps the best way to answer it is by using the litmus test that JLT has long claimed to use to assess all major decisions, namely the desire to balance the interests of “our four key stakeholders: our clients, our colleagues, our trading partners and our shareholders.”

From a JLT shareholder perspective, this was clearly a stunning deal -- all credit to JLT's management team for bending Marsh so far over the barrel that they must almost have been touching their toes on the other side. If anything proves Marsh’s white-knuckled determination not to let JLT slip through their fingers into Aon’s embrace, it was the price they paid. Given that the JLT board’s primary responsibility is a fiduciary one, who can blame them or have any complaints? What this means for Marsh’s shareholders, though, is another matter.

From a client perspective, it is hard to see the deal as anything other than negative. These were already two very good businesses -- putting them together may fill in some gaps here and there for Marsh (e.g., LatAm, Asian EB, Australian public sector) and bring some extra capabilities to JLT (e.g., analytics and engineering), but it doesn’t radically improve the overall customer proposition. In fact, it may have the exact opposite effect for many customers as, in a market already dominated by three silverbacks, the loss of the one challenger willing and able to upset the natural order of things will be keenly felt. Inevitably, this will lead to client losses, particularly from some of the larger accounts, who will not be willing to put their eggs in one basket or in the same basket as one of their major competitors.

See also: Distribution: About To Get Personal

What, for example, are the prospects for JLT’s U.S. wholesale business, which had previously managed to convince its producing brokers that JLT's U.S. specialty-focused play didn’t really compete with them but may now find that argument ringing somewhat hollow! What is the outlook for JLT’s hard-won U.S. specialty business, which has been largely built off the back of its ability to position itself as radically different from the big three? What is the future for JLT Re, whose strong march over the past few years has been fueled by its clients' desire to diversify their placement and its team’s ability to bring a fresh perspective?

Don’t forget, also, that much of JLT’s success has come from winning share from its major competitors, including Marsh. The idea that these same clients will allow themselves to be tamely shepherded into the Marsh fold is wishful thinking. At best, they might tender the business -- but, with JLT out of the way, Aon and Willis will now be as likely to win the business as the enlarged Marsh is to retain it.

From a trading partner, or insurer, perspective, the deal is nothing short of disastrous. The forced sale of JLT’s market-leading aviation business to AJG by the E.U., at what seems to be a knockdown value for the best franchise in the market, probably deals with the biggest area of market concentration but doesn’t solve the bigger issue. I don’t know what Marsh/JLT’s combined share of Lloyd’s is, for example, but I would have thought it could be 30% to 40%. In some classes, a lot more. It will be the same picture elsewhere. That is a big problem, albeit one of the market’s own making, as it has consistently rewarded increased placement scale with better commissions, thereby slowly strangling itself to death.
The growth of JLT has been at least partly due to the markets deliberately nurturing it as a counterpoint to the dominance of the big three and offering it terms that allowed it to compete on something approaching a level playing field. Of course, some of the larger markets will be seeing this as an opportunity to grab an even bigger slice of the combined book. But the prospects for many of the smaller markets, which JLT had supported by eschewing the programmatic placement of its larger competitors and distributing risk far more widely across the market, are bleak.

Which brings us finally to people. And this is where the harsh reality of the deal really hits home. Job reductions of 2% to 5% of the combined workforce of 75,000 are planned. That is 3,500 people, with families and mortgages and careers, effectively funding the bulk of the short-term deal benefits. And whatever has been said about selecting the best of breed, etc., everyone knows where the brunt of these job losses will fall. In the words of Sen. William L. Marcy, “to the victor belong the spoils.” Hard to see who the winners are here, apart from those cashing in their share options and heading for the race track.

What then are the prospects for Marsh’s own shareholders? Well, there are some positives to cling to. There will be some geographic complementarities in Asia, Australia and LatAm, where JLT is strong. JLT’s fantastic offshore operation in India also provides a template for Marsh to replicate on a far larger scale, creating a huge opportunity to drive cost and operational efficiency through the business. The cost synergies, as already mentioned, will be significant -- I would guess that the stated target of £250 million will be comfortably beaten; Marsh has been around the block enough times to know to under-promise and over-deliver in this area. And from a revenue perspective, there is a big opportunity to re-engineer JLT’s book and take advantage of Marsh’s more aggressive approach to squeezing insurers for enhanced commissions, work-transfer fees, consultancy arrangements, re-insurance placements and all the other weapons in the broker’s arsenal of dark arts.

The only problem, of course, is that this is all one-off. Extracting the cost synergies and re-engineering the book will significantly improve short-term profits. But it won't deliver the long-term organic revenue growth that will be required to justify the nose-bleed multiple that Marsh has paid. Although of course, by the time anyone runs the actual numbers, it will probably be someone else's problem to deal with!

The real question, therefore, is whether the profit improvement will offset the unavoidable attrition that will result from the combination of the two businesses. Attrition borne partly of clients voting with their feet, for the reasons already set out above, but more of the collateral damage caused by the inevitable clash between the two businesses’ cultures. It is hard to overstate just how big an issue this is likely to be.

JLT was a disruptor. It deliberately positioned itself (not always very accurately!) as the nimble, entrepreneurial, innovation-led counterpoint to Marsh, Aon and Willis’ slow, monolithic and commoditized approach. In a market drowning in a sea of sameness, JLT was able to articulate a distinctive message with real cut-through that was hugely successful in attracting some of the best people in the market from the big three, by making them feel special and part of something different and better.
It was almost tribal -- you were either lucky enough to be invited to be part of JLT, or you were against them. Whatever the cold economic logic of the circumstances that led to JLT selling out, many will always view this decision as an unforgivable betrayal of trust, such was the power of the "cult" that JLT had created. It is patently nonsensical to now expect these same people -- who, in choosing to work at JLT, had in most cases consciously rejected the opportunity to work at one of the big three in order to benefit from JLT’s culture and more delegated approach to management and placement -- to accept life under Marsh’s command-and-control management style.

It makes you wonder whether Marsh really understand what they have bought or the challenge they will face in hanging onto it. The story I have heard (which I have no way of verifying) is that the deal was struck in little over a week -- if true, I’ve spent longer choosing wallpaper! The oddity, of course, is that if there was one real strategic opportunity from this deal it would be JLT injecting some of its entrepreneurial DNA into the Marsh culture and giving it some of JLT’s street-fighting swagger. I'd love for that to happen. But history tells you that it is the one thing that is most likely to be lost.

See also: Insurtech Is Ignoring 2/3 of Opportunity

Whatever retentions are put in place -- and a staggering £75 million has been earmarked for this purpose up until the deal completes -- the best people will surely leave, as they always do. And there will be no shortage of people looking to offer them a home, or private-equity companies willing to back the right management teams.

If I had to make a prediction, I would say that Asia, Australia, U.K. mid-market insurance broking and EB will be pretty stable. But JLT’s European network will fragment, as I doubt any of them will take the Marsh shilling. JLT’s LatAm minority interests will sell out to Marsh, providing some short-term stability, but good luck enforcing a restrictive covenant in Peru or Chile when their earn-out comes to an end in three or four years’ time! JLT Re will, you would think, given the over-concentration of the broker market, have largely re-constituted itself somewhere else within a few years. JLT’s London market wholesale and specialty business will fragment, attracted either to specialist competitors or to one of the various PE-backed start-ups that are circling JLT's carcass. JLT U.S. will also fragment as the team disperses, whether together or across the market, bringing to an end one of the most impressive market entry initiatives in recent memory.

How much business could be lost? Your guess is as good as mine, but if I had to speculate I would say 30%, maybe even 40% in some areas over the next few years, as people leave and clients move. But here’s the best part: The Marsh shareholders may well not even care! When you lose the revenue, you lose the associated costs, as well, and many of these brokers are very well paid indeed. The combined impact of the cost savings and the portfolio re-engineering, plus the undeniable benefits of scale in today’s market, may well mean that Marsh can afford to take this level of revenue loss and still deliver a good return to its shareholders, having in the process also taken out an increasingly annoying thorn in their side.
The big winners here -- apart from the headhunters who must already have their new Porsches on order and the deal advisers pocketing hundreds of millions of dollars of fees -- will almost certainly be the next tier of brokers, who stand to hoover up talent and business in the biggest feeding frenzy the market has seen for a long time. In particular, Hyperion and AJ Gallagher would seem to be well-positioned as the natural successors to JLT’s crown, with a growing global footprint and a proposition (at times more aspirational than actual) focused around specialty and agility that many within JLT will find reassuringly familiar and attractive. I would also have thought that some of the bolder U.S. brokers such as Acrisure, Alliant or Assured Partners, looking to grow outside of their domestic markets, may well also see this as an unprecedented opportunity to build an international bridgehead.

Overall, though, it is hard not to feel sad as another great London market name bites the dust. JLT’s shareholders are undeniably richer, and maybe in the modern world that's all that matters -- what choice did they really have at the end of the day? But JLT’s clients, colleagues, trading partners and the market at large will be a lot poorer for its passing. Couldn't a BlackRock or a KKR have taken JM's stake off its hands and .... we will unfortunately never know.

But perhaps this isn’t the end of the JLT story. Some phoenixes will almost certainly rise from the ashes of this deal.

This article originally appeared here.

How to Innovate With an Agency Partner

The key to true transformation and innovation isn’t just about investing in new technology. It’s about investing in new mindsets.

The term post-digital was coined by a tech journalist named Russell Davies in 2009 to describe a new paradigm he saw on the horizon. His use of the term in no way suggested that digital was to become obsolete or a thing of the past, rather that digital was becoming so commonplace, so deeply entrenched in how we operate in the world as humans, that to even differentiate something as such would soon be futile. To be post-digital, he explained, would have nothing to do with not being digital, but the opposite; to be post-digital would mean to be totally, completely and fundamentally digital. Digital to the core. In the words of architectural designer, professor and writer Adam Fure, the post-digital “is deeply digital.”

Fast-forward 10 years, and whether or not we have arrived in the post-digital age Davies described is still up for debate. What is undeniable is that in the 10 years since Davies coined the term we have only become more deeply entrenched in all that is digital, so much so that it now seems easier to point out industries that have not gone digital than those that have. For the most part, the industries that are still lagging behind (think healthcare, insurance, financial services) have all had digital transformation agendas in play for years, yet many are still struggling to digitize even the most basic functions of their businesses and tend to confuse the use of technology that has been around for years (e.g., chatbots) with innovation, all the while being left in the dust by industries that either grew up digital or went digital years ago. These latter industries don’t have digital transformation agendas; they are already digital. Instead, they are focused on true innovation.

While acknowledging that digital technology will inevitably play a role in the future, companies doing true innovation aren’t focused on technology as an end in and of itself, but are instead embracing and experimenting with different ways of thinking, learning and working that breed creativity, out-of-the-box thinking and, ultimately, new transformational ideas. Whether you are just getting started with innovation or are further along on your journey, there are a few key things we encourage our clients to keep in mind when partnering with an agency:

Stop buying deliverables. Invest in mindsets.

“…we must not be seduced by the artifacts of this process” - Dwight D. Eisenhower

Clients often come to agencies frustrated by experiences they have had with other agencies; they are tired of paying top dollar and getting "nothing" actionable. All the while, they are up to their old tricks, requiring pages and pages of documentation and slide after slide of deliverables that show the work happened. From my experience, slide decks and documentation are never evidence that innovation or transformation has taken place and are seldom a playbook for how to achieve it. Evidence of innovation and transformation can be best measured by looking at you (and your team) and how mindsets have shifted over the course of the engagement. If the engagement has been successful, people think, work and operate differently. In short, you are the deliverable. Everything else is just office decoration.

See also: 3 Steps to Succeed at Open Innovation

My advice: Stop asking your agency partners simply what you are going to get or what they are going to help you build. If this is what you expect out of a successful engagement, you are looking for and buying the wrong thing.
That website redesign isn’t going to change much of anything, at least not anything sustainable or real, if you haven’t actually learned anything. Your mindset hasn’t changed. Instead of expecting your agency partner to leave you with cool new product ideas, expect the partner to teach you how to think like they do. Any skilled agency can create deliverable after deliverable, but if you are exactly the same person you were before the project, you still won't be able to do anything with the deliverables. Your mental models will still be the same. You will be the same. Your team will be the same. Left alone, things will devolve into what you already know, and what your team knows.

Going through a design-thinking process is about living in the messy, with no clear lines or phases between things. If you haven’t come to terms with that ambiguity, you haven’t really experienced what it feels like to work in this very different way -- let alone explain it to others, no matter how clear that project plan is. It’s not just different outputs, or a different process. It’s actually a different way of thinking. You should be buying mindsets and outcomes, not things. Think of it as Marie Kondo for your brain.

For this to happen, you’ll need to let go of control, of rigid lines, of business as usual. For innovation to work at scale, you have to stop evaluating the in-between against a rigid framework (i.e., the three-year plan, the requirements document) and start measuring outcomes. Your success or failure is dependent on that truly incongruous detail. Can you imagine if Starbucks had a three-year plan that they refused to move away from? Instead, they are constantly responding to and evolving the customer experience. They experiment, they test, and what works gets rolled out and what doesn’t gets designed out. They don’t just innovate at the product level, they innovate at all levels of the business from manufacturing down to the drink straw. That’s why they were named the most innovative company by Fast Company in 2018.

Innovative companies have innovative cultures, not just innovative people.

While transforming mindsets should be the goal of an agency partnership, changing one person or even three isn’t enough. Innovation is about investing in new mindsets to change how the whole organization thinks. Much like Starbucks’ end-to-end approach or Microsoft’s famous shift from a know-it-all culture to a learn-it-all culture, the responsibility of innovation is on everyone at every level of the business. Toyota is famous for making every single person on the workforce focus on continued improvement. That’s what it takes. Success in innovation is about respecting and empowering everyone, the model for a true learning organization.

No matter how "innovative" you or your immediate team is, the larger business you reside within is probably still operating very traditionally (unless you are one of the lucky few). The same poorly structured incentives, the same operational structures, the same LOBs, the same territorial blindness and the same power politics -- all of which are the death knell to innovation. Innovation today requires much more than a surface remodel; it requires a new business model. It also starts from the top down. That grassroots stuff, it's cute, but it rarely works and requires an incubation period way longer than most businesses have the patience for. You can't just snatch all the innovative thinkers and plop them onto a new "innovation lab" and assume the barriers are gone.
All you've done is just create a siloed vacuum of a bunch of smart people who "get it." The problem is that a large percentage of them have only an academic understanding of what an innovation process actually “feels” like. They have to be set up for success at the highest levels of the business. In the end, those corporate innovation labs almost always end up hiring an agency to “fix things” or to get “results faster.” Worse is when they want to learn new ways of working, but don’t make time for it and aren’t around for the actual learning part.

Innovation is more than a process. It’s an experience.

“To achieve breakthrough results, you can't just change what you do. You first must change how you think… New thinking leads to new doing.” - Mark Bonchek

True innovation goes beyond practices and processes. If you haven’t experienced it, it’s hard to explain it. There’s a deep understanding that comes from real, in-the-trenches problem-solving. Innovation is all about learning a new way of thinking and working. It’s about developing a nimble mindset that allows you to move through an ambiguous problem-solving exercise, constantly revisiting your original thesis, to solve wicked problems. This kind of thinking requires an honesty and a flexibility that is contradictory to most command/control structures. All that success of those Silicon Valley people isn’t because they are “Agile.” It's because they solve problems differently; as a collective, that’s a powerful thing. You have to want to “think differently,” not just want to be successful. And you can’t learn to think differently unless you actually experience what it’s like.

See also: Era of Insurance Innovation Is Upon Us

And this experience is something everyone in an organization needs to get a taste of. Here at Cake & Arrow, we run workshops with our clients where the whole output is shifting how a department thinks, giving them hands-on experience with the methods to help them sustain that shift. To do so, we aren’t just working with the “creatives” or the “innovative thinkers” within that department. We are working across the organization, with people from marketing, from operations, from IT, etc. In doing so, we help them move from a service mindset (cog-thinkers are quickly being replaced in today’s world of automation) to one where they form strategic relationships with their partners across the business. These new patterns of interaction are what businesses need to solve complex problems today. With the proper support, fundamental change like this can cascade throughout an organization, providing longer-term, sustainable transformation.

Every now and again, in partnership with amazing Agency X, companies manage to bring that consumer-optimized digital experience to market. While that’s all fine and good, it’s just the beginning. Without the right minds on hand to actually continue to evolve and sustain that product long-term, they will eventually end up right back where they began. They relapse, and two years later, they go right back to the agency to redesign and relaunch the experience. This is because they bought a thing (a website, an app, a sparkly PowerPoint) and not a mindset. Innovation is about a new way of working; it’s a different way of problem-solving that requires continuous nurturing to expand and grow.

When it comes time for your next project, stop buying deliverables, and instead invest in mindsets. Only then will you have the right tools to solve for what’s next and finally move beyond your digital transformation agenda, because now you have actually transformed, and digital was only part of the story.

This article originally appeared here.

Emily Smith

Emily Smith is the senior manager of communication and marketing at Cake & Arrow, a customer experience agency providing end-to-end digital products and services that help insurance companies redefine customer experience.

Advising With AI: A New Approach

If AI can substantiate what a business adviser recommends, if that recommendation is brief yet bold, an insurer can succeed.

If actuaries are the elite of the insurance industry, members of a licensed class whose workaday language is intelligible to few but influential to many, then business advisers are the interpreters of a separate yet equally important language: data. More to the point, business advisers are an independent class—hence their advisory role—in which they do much more than translate data into reports. They use artificial intelligence (AI) to find intelligence worth analyzing. They use intelligence to advance wisdom, because it takes skill to convert ones and zeros into a message that is as concise as it is compelling; it takes a different class of advisers to actualize a future that is close but hard to see; it takes verbal facility and visual acuity to present the future—to make the future present—for the insurance industry.

AI is a tool to accelerate the future. That future depends on business advisers who can explain why what seems possible is not only probable but inevitable: that AI answers the needs of insurers. Making the answers accessible—proving the answers are right—is an issue of talent, not technology.

According to Nick Chini, managing partner of Bainbridge, AI augments the models actuaries develop and business advisers deliver. Which is to say actuaries theorize scenarios—they quantify what may happen—while business advisers qualify how things will likely happen; how a business adviser infers what will happen; how the inferences a business adviser draws are the result of his fluency in data; how his education is more extensive, his expertise more expansive, his experience more exhaustive than that of a typical adviser. He is atypical, in a good way, because he represents the future of business advisory services. He may have a doctorate in linguistics or degrees in finance and computer science. He may be a former professor or a career academic.

To paraphrase Chini, what matters most is a business adviser’s ability to advise: to add value by abandoning generalities—to stop generalizing, period—and offer specifics about the future direction of the insurance industry and the course insurers should follow. In this situation, AI acts like a compass. It points the way, telling a person where to go without revealing how or when that person should start his journey.

See also: What to Look for in an AI Partner

For an insurer to begin that journey, he must know the advice he receives is right. To advise, then, is to communicate—to communicate with clarity and conviction—so an insurer has no reason not to do the right thing, so an insurer believes in the rightness of his decision, so an insurer knows he is right. If business advisers can further what is right, to get insurers to more easily and expeditiously do the right thing before challenges arise, the insurance industry will benefit as a whole.

If AI can substantiate what a business adviser recommends, if that recommendation is brief yet bold, if that business adviser can prove his recommendation is right, an insurer can succeed. Let us welcome the chance to read—and endorse—these recommendations.

Insurance Has a 'Triple-Legacy' Problem

Insurance is stuck with legacy technology, built for legacy processes, operated by a legacy mindset. Each problem must be corrected.

I talk to a LOT of executives on a weekly basis from all parts of the insurance industry:
  • Senior executives at the reinsurer level
  • C-level executives at the insurer level
  • C-level executives at the intermediary level
  • Regulators
  • Reporters/writers/bloggers
  • Wholesalers
  • Advisers
  • Administrators
  • Other insurtechs
  • Literally anyone who will talk about the sexy subject of Insurance!
These conversations give me an extremely interesting perspective about not only the current state of the insurance industry, but also about where we are heading. A few themes consistently emerge during these conversations: machine learning, AI, the role of advisers in the future and the most prevalent topic: "disruption."

Let me start by offering a perspective: "No one needs to be disrupted unless they choose to be." Innovation and technology at large are about inclusion and raising the bar of a particular industry. Innovation DOES NOT mean people have to LOSE for other people to WIN. There is a WIN-WIN-WIN; we just have to be motivated to find it.

See also: Is Insurance Really Ripe for Disruption?

We, in this part of the world, are lucky enough to truly have equal opportunity. Technology democratizes our ability to compete. WE determine what is possible, who we choose to work with, which problems we solve, etc. To turn around as an industry and say "we are being disrupted" is a completely inaccurate way of looking at the future and exceptionally unfair to those individuals/companies who are looking to create meaningful change. Here are a few thoughts:

1) Advisers will be around in the next generation of financial advice (and the one after that). Their role will remain the same (helping people feel better about the financial decisions they make), but how they do their job will fundamentally shift. As we streamline parts of the industry that are siloed, cumbersome and antiquated, we will allow advisers to re-invest their time into what matters: the customers they serve and their own development. Technology will allow people to UNLOCK their human potential and do what they love.

2) Technologies such as machine learning will AUGMENT, not replace, the role of amazing advisers. Many believe that as technology and automation come into the insurance space, fewer advisers will be needed. The contrarian view is that if people are able to focus on WHY they love the industry (helping people!) and we automate the functions they dislike (administration, compliance, etc.), more people will be drawn to our industry. Insurance (and all financial services) is about people, not paperwork. This may actually INCREASE the number of advisers. That is my hope.

3) You simply cannot solve a systemic problem with technology alone and expect everything else to take care of itself. Technological innovations will make today's systems better, but they will inevitably become legacy in the future. This is the nature of technology. It is a moment in time, until the next moment. What will be everlasting is the MINDSET that executives take to navigate the future. There is NO SILVER BULLET. This is about systemic change. We must THINK DIFFERENT and forget some of the preconceived notions that stifle our ability to effect change.

Currently, our industry is being hampered by a "triple-legacy" problem: legacy technology, built for legacy processes, operated by a legacy mindset. Addressing each part of the "triple-legacy" challenge is required to ensure the future success of our industry.

See also: Insurtech vs. Legacy Insurance Carriers

At Finaeo, we are dedicated to solving many of the problems that are holding back our industry and are starting our journey by helping the hundreds of thousands of advisers around the world get their time back by streamlining processes and providing them with elegant, intelligent and thoughtful technology to manage their day.
We are actively seeking partnerships with insurers, reinsurers and intermediaries who also believe that advisers and their customers (the policy holders) DESERVE a better experience. Want to know more about why you should join Finaeo? Check us out here. This article first appeared here.

Aly Dhalla

Aly Dhalla is the CEO/co-founder of Finaeo, a venture-backed insurtech startup that is reshaping insurance distribution to help independent advisers thrive in a digital era.

Fuzzy Language Limits Cyber Adoption

Insurers must solve the language problem to remain relevant, providing broad coverage for the cyber peril on the insurance lines it affects.

In a late February blog post, InsuranceThoughtLeadership.com editor Paul Carroll urged the industry to “watch your (our) language!” and to “get our talk – our vocabulary – straight.” While he focused on examples of inaccurate word choice impairing customer perception of insurers and their products, his point is equally relevant when talking about the cyber insurance segment. Nomenclature matters for insurers and their customers alike. Two industry terms that are being used inconsistently and are often conflated in the cyber markets are “silent cyber” and “cyber as a peril.”

“Silent cyber” relates to an insurance policy that does not affirmatively include or exclude coverage for losses arising out of the cyber peril. Such a contract causes uncertainty. For example, if a hacker manipulates connected devices or systems and causes property damage, will your property policy cover the losses? If a hacker or a human IT error brings down a power grid or causes a dam to flood, would a business interruption policy respond? Historically, insurers did not price for these events in their rates, as these events weren’t relevant when the policy language was written. Property policies also often don’t price for, nor address, extra-terrestrial invasions in the language; while there is a non-zero possibility of a Martian landing, we are reasonably comfortable that it is okay to be silent there. But loss events from the cyber peril that affect a variety of property and casualty policies are becoming more apparent every day. A few examples include Stuxnet, the 2015 Ukraine power outage, Saudi Aramco’s 2012 hack and NotPetya.

Understandably, more insurers are reviewing their various P&C contracts to ensure contract certainty on policies with potential exposure to cyber events. Forward-thinking insurers are inventorying policies to determine where silent exposures exist and amending policy language to affirmatively exclude (using language like CL380) or affirmatively cover (and price for) losses from the cyber peril. While competitive pressures can make this exercise a real challenge, insurers that get this right will have better certainty on exposures, manage capital more efficiently and be better-equipped to innovate on products that cover the various losses that can emanate from the cyber peril.

See also: Cybersecurity for the Insurance Industry

Insurance policies are contracts of adhesion drafted by the insurer, so ambiguity will often be interpreted in favor of the insured by the courts, due to the asymmetric influence the insurer has over the language. Next, consider that a property policy can have limits as high as $1 billion, while the typical affirmative cyber insurance policy limit is below $25 million. We can see why it is a benefit to the market for insurers to have certainty (by ending the silence) on large property policy limits with potentially catastrophic exposure.

Brokers and risk managers need to ensure that the organizations they represent have adequate coverage (in terms of coverage scope and policy limits). Brokers are expanding physical coverage on traditional cyber policies as well as adding cyber, as a covered peril, on adjacent P&C policies. Reinsurance carriers and brokers are also making new reinsurance products. These efforts will benefit the market at large in the form of more robust policies that are underwritten and priced with a conscientious evaluation of the covered perils. Insurers are moving swiftly to offer cyber coverage.
About 150 insurance companies have booked premiums for such insurance, up from about 100 last year. Only about 50 carriers offered cyber coverage four years ago.

This activity all matters because cyber is a virtual and existential business risk that, for the most part, insurance products don’t adequately address yet. The industry needs to solve this problem to remain relevant by providing broad coverage and limits for the cyber peril on the variety of insurance lines it affects. Teams across insurers are collaborating to provide appropriate line-specific and cyber-specific expertise to approach the problem. And greater attention from regulatory institutions and rating agencies is likely ahead in terms of inserting stronger language in regulations to address cyber risks and holding conversations with insurers. Lloyd’s of London, which its CEO says has a 20% to 25% share of the multibillion-dollar cyber insurance market, is requiring its syndicates to report quarterly on their cyber exposures.

Companies, however, have been slow in adopting cyber insurance due to its complexity and limited coverage. Calling them “cyber” policies implies coverage for a wider variety of exposures than the policy actually contemplates. Again, nomenclature matters. While the majority of companies subject to data breach regulations, like large financial institutions, healthcare, retail and hospitality companies, purchase coverage, total market penetration is only about 15% in the U.S., with small business being the lowest adopters.

Outside of the obvious statement that no one is immune to a costly data breach, there are several reasons for insurers to engage with cyber insurance and promote the coverage to businesses. The significant factors include:
  • The explosion in the number of devices connected to the internet, which International Data Corp. estimates will reach 200 billion by 2020, up from just 2 billion in 2006. Each of those devices is a point of entry for a cybercrook, and every employee interaction with the internet poses a potential threat of a breach.
  • The automation of business operations and processes, which makes critical elements of a company’s infrastructure and systems vulnerable to cyberattack and should therefore be part of any risk management plan. Manipulation of this connected infrastructure could cause physical damage arising out of the cyber peril.
  • The rapid development of more sophisticated cyberattacks, including ransomware, supply chain attacks and formjacking (inserting code into retailers’ websites), which makes it tough to respond with countering technology. Ransomware damage alone reached an estimated $5 billion in 2017, according to Cybersecurity Ventures.
Like insurers, companies and their brokers should be inventorying their own policies to evaluate the scope of coverage on their “cyber” policies, as well as evaluating coverage for the cyber peril on all of their P&C policies to ensure there are no surprises at the time of a claim.

See also: Innovation — or Just Innovative Thinking?

As technology continues to advance, insurers need to find ways to adapt quickly and create innovative products that properly protect their customers from the exposures that are relevant today. Collaboration is needed within insurance companies and across all parties in the insurance value chain to properly protect insurers and insureds from this existential business risk.
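Both halves of that inventory exercise, the insurer's and the policyholder's, amount to classifying each policy's cyber wording and totaling the limits that remain silent. Here is a minimal sketch of that idea in Python; the Policy fields, CyberTreatment categories and dollar figures are hypothetical illustrations, not any insurer's actual schema or book of business.

```python
# A minimal, illustrative inventory of policies and their cyber wording.
# Assumption: policy data is available as simple records; categories and
# figures are hypothetical, not any insurer's actual schema or portfolio.
from dataclasses import dataclass
from enum import Enum

class CyberTreatment(Enum):
    SILENT = "silent"                                  # neither covers nor excludes cyber
    AFFIRMATIVE_COVER = "affirmative cover"            # cyber included and priced for
    AFFIRMATIVE_EXCLUSION = "affirmative exclusion"    # e.g., CL380-style wording

@dataclass
class Policy:
    policy_id: str
    line_of_business: str   # e.g., "property", "business interruption", "cyber"
    limit: float            # policy limit in dollars
    cyber_wording: CyberTreatment

def silent_cyber_exposure(policies):
    """Return the silent policies, largest limit first, and their total limit."""
    silent = [p for p in policies if p.cyber_wording is CyberTreatment.SILENT]
    silent.sort(key=lambda p: p.limit, reverse=True)
    return silent, sum(p.limit for p in silent)

portfolio = [
    Policy("P-001", "property", 1_000_000_000, CyberTreatment.SILENT),
    Policy("P-002", "business interruption", 250_000_000, CyberTreatment.AFFIRMATIVE_EXCLUSION),
    Policy("P-003", "cyber", 25_000_000, CyberTreatment.AFFIRMATIVE_COVER),
]
review_queue, total_silent_limit = silent_cyber_exposure(portfolio)
print(total_silent_limit)  # 1000000000 -- the $1 billion property policy is still silent
```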

Philip Rosace

Philip Rosace is a cyber risk authority and the inventor and patent holder of cyber risk quantification systems at Guidewire Software.

How to 'Own the Anchor' in Settlements

Data can let companies resist plaintiff attorneys' attempts to "anchor" negotiations with a high initial demand.

Anchoring occurs in third-party settlement negotiations when one side throws out a number in an effort to influence—or “anchor”—the way the opposing party values a claim. Our experience shows that anchoring directly affects the settlement value of injury claims, so it’s critically important for insurance adjusters to “own the anchor” in negotiations.

What Drives Anchoring?

Anchors come in many varieties, but we find that medical costs are, by far, the greatest and most troublesome. It’s natural to assume that medical costs are an important factor in a settlement and a possible starting point in a negotiation. We are aware that some medical providers grossly overcharge for services and run up their bills. Most of us also know full well that medical costs are only one factor to consider and shouldn’t be the sole determinant of claim value. What we don’t realize, however, is that medical bills have an undetectable and powerful impact on the ultimate settlement amount. While we believe we treat medical costs as separate and distinct from other elements in a claim, our research demonstrates that medical costs have an outsized influence on the pain and suffering portion of the claim.

The Proof

Chart 1

To understand how medical costs influence, or anchor, the settlement, we looked at the relationship between first demands as a multiple of submitted medical costs (e.g., a settlement demand of $10,000 with $2,000 in submitted medical bills represents a 5:1 multiple) and final settlements as a multiple of medical costs (a $6,000 settlement with $2,000 in medical bills is a 3:1 multiple). Chart 1 illustrates how dramatically medical costs affect the amount we are willing to settle for. It would be hard to show a higher correlation. The higher the demand by a plaintiff attorney relative to the medical bills, the higher the settlement the attorney was able to negotiate. The plaintiff attorney very effectively “owns the anchor” by anchoring the overall value of the claim in the medical costs.

See also: Surprise Medical Bills: Just a Distraction

Why Medical Bills Influence

Medical bills have such an outsized impact on settlement values because they are often the only tangible item in discussions. Venue, pain and suffering, a claimant’s age and the like aren’t tangible. By contrast, the dollar amounts shown in medical bills are easy to reference, and we naturally latch on to them. Even when we seek to discredit the medical bills, we’re still referencing them, granting them relevance, even authority. They stubbornly remain the anchor.

The Fact-Based Fix

Chart 2

Thankfully, facts are stubborn, too, and work very effectively to unseat anchors. Plaintiff attorneys rely heavily (and successfully) on medical costs that are rarely examined against the underlying facts of the case. Exploiting a claim’s facts demands command of the details of the injury and its treatment. As Chart 2 illustrates, the same adjusters who produced the results in Chart 1 were subsequently able to “own the anchor” through negotiation grounded in facts. Our experience shows that fact-based analyses dislodge the anchor of the submitted medical bills. In fact, the bills showed a negative correlation with the ultimate settlements accepted by the plaintiff attorneys. The dispersion of results in Chart 2 shows that the tight relationship between submitted medical bills and settlements had ceased, with the anchor effectively displaced. This yields accurate settlement values grounded in facts.
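To make those multiples concrete, here is a minimal Python sketch with made-up numbers (not the data behind Chart 1 or Chart 2): each claim's first demand and final settlement are expressed as multiples of its submitted medical bills, and a plain correlation shows how tightly the two track when the anchor holds.

```python
# Made-up numbers, purely to illustrate the multiples described above --
# not the data behind Chart 1 or Chart 2.
claims = [
    {"medical": 2_000, "first_demand": 10_000, "settlement": 6_000},  # 5:1 demand, 3:1 settlement
    {"medical": 5_000, "first_demand": 40_000, "settlement": 22_000},
    {"medical": 3_000, "first_demand": 9_000,  "settlement": 5_500},
]

def multiples(claim):
    """Express the first demand and the final settlement as multiples of medical bills."""
    return (claim["first_demand"] / claim["medical"],
            claim["settlement"] / claim["medical"])

def pearson(xs, ys):
    """Plain Pearson correlation, so the sketch needs no external libraries."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

demand_multiples, settlement_multiples = zip(*(multiples(c) for c in claims))
print(pearson(demand_multiples, settlement_multiples))  # close to 1.0 when the anchor holds
```

On a fact-based book of settlements like the one behind Chart 2, the same calculation would show the relationship flattening or turning negative.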
When arguing from facts, the adjuster calls the plaintiff attorney’s medical bill bluff and “owns the anchor.”

Set Sail

It's facts or nothing. So, how do we ground our negotiations in facts? The answer lies in shifting a case from its “raw data,” like medical bill costs, to substantive arguments. And we don't mean bill review services, which focus on explanations of benefits and reductions in charges according to a fee schedule. Because bill reviews rely on obscure medical codes and rules that aren't widely understood or easily explained in negotiation, they carry little authority and can't help us unseat the anchor. Substantive arguments must be backed by evidence drawn directly from claim records, such as information from treatment notes, emergency room reports and observations from the medical world about how injuries are objectively documented and what justified medical care looks like. This approach will lower your ultimate settlement costs, with the wind in your sails.

See also: How to Cut Litigation Costs for Claims

For example, in a soft tissue injury case, plaintiff lawyers frequently assert that their client struggles with everyday activities. Rather than being drawn in by this assertion, adjusters must undertake a thorough review of the claimant’s medical records (including treating physician, chiropractor and physical therapist notes) to determine whether the evidence demonstrates actual physical impairment. This includes examining observations of the claimant’s range of motion, strength and other functional deficits. If we aren’t prepared with this information, it is easy for a plaintiff attorney to pair these assertions with anchoring, sowing the doubt that drives up the offers we’re willing to make. Unless the lawyer’s claim of severe injury can be tied directly to evidence in the medical records, there is no basis for it to influence the adjuster’s evaluation. Where objective medical data indicates little to no impairment, claims of daily distress don’t hold water.

To dislodge the anchor cast by a plaintiff’s lawyer, adjusters must have organized medical records; a methodology (including the use of medical standards) to consistently review and interrogate the evidence from those records; and the ability to produce a single, coherent assessment of the claim. We have found that the most effective use of evidence lies in assessing each component of a claim: the pain and suffering of each injury, the legitimacy of claimed medical costs from each healthcare provider, the support for lost time from work. This is carried into negotiation, where an adjuster methodically presents his or her analysis of the claim and uses tactics that keep the negotiation focused on facts. Training is a necessary element in this, but the dividends are significant.
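As one illustration of that component-by-component assessment, here is a minimal Python sketch; the component names, amounts and evidence notes are hypothetical, and a real assessment would be built from the organized medical records and standards described above.

```python
# Hypothetical component names and amounts, for illustration only; a real
# assessment would be built from the organized records and standards above.
from dataclasses import dataclass, field

@dataclass
class ClaimComponent:
    description: str                 # e.g., "pain and suffering - cervical strain"
    claimed: float                   # amount asserted by the plaintiff
    supported: float                 # amount actually supported by the records
    evidence: list = field(default_factory=list)  # cites to treatment notes, ER reports, etc.

@dataclass
class ClaimAssessment:
    claim_id: str
    components: list

    def supported_value(self):
        """The fact-based evaluation carried into negotiation: supported amounts only."""
        return sum(c.supported for c in self.components)

assessment = ClaimAssessment(
    claim_id="CLM-123",
    components=[
        ClaimComponent("pain and suffering - soft tissue injury", claimed=30_000, supported=8_000,
                       evidence=["PT note: full range of motion at visit 6",
                                 "ER report: no neurological deficits"]),
        ClaimComponent("chiropractic charges", claimed=6_000, supported=2_500,
                       evidence=["treatment plateau documented after visit 10"]),
        ClaimComponent("lost time from work", claimed=4_000, supported=0,
                       evidence=["no physician-ordered work restriction in the file"]),
    ],
)
print(assessment.supported_value())  # 10500
```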

Jim Kaiser

Jim Kaiser is the CEO and founder of Casentric. Kaiser brings nearly 30 years of experience in the claims industry to Casentric.

The Race Is on for 'Post-Quantum Crypto'

We’re 10 to 15 years from the arrival of immensely powerful quantum computers; cryptography needs to be future-proofed starting now.

Y2Q. Years-to-quantum. We’re 10 to 15 years from the arrival of quantum computers capable of solving complex problems far beyond the capacity of classical computers. PQC. Post-quantum cryptography. Right now, the race is on to revamp classical encryption in preparation for the coming of quantum computers. Our smart homes, smart workplaces and smart transportation systems must be able to withstand the threat of quantum computers.

Put another way, future-proofing encryption is crucial to avoiding chaos. Imagine waiting for a quantum computer or two to wreak havoc before companies commence a mad scramble to strengthen the encryption that protects sensitive systems and data; the longer we wait, the bigger the threat gets. The tech security community gets this. One recent report estimates that the nascent market for PQC technology will climb from around $200 million today to $3.8 billion by 2028 as the quantum threat takes center stage.

I had the chance at RSA 2019 to visit with Avesta Hojjati, head of research and development at DigiCert. The world’s leading provider of digital certificates is working alongside other leading companies, including Microsoft Research and ISARA, to gain endorsement from the National Institute of Standards and Technology (NIST) for breakthrough PQC algorithms, including Microsoft’s “Picnic” and ISARA’s qTESLA.

See also: Cybersecurity for the Insurance Industry

Hojjati outlined the challenge of perfecting an algorithm that can make classical computers resistant to quantum hacking — without requiring enterprises to rip and replace their classical encryption infrastructure. For a full drill-down of our discussion, give a listen to the accompanying podcast. Below are excerpts edited for clarity and length.

LW: What makes quantum computing so different from what we have today?

Hojjati: The main difference is that a classical computer digests a single value (a single bit) at a time, either a zero or a one. Quantum computers store information in quantum bits, or “qubits,” and can digest 0, 1 or a superposition of both 0 and 1 to represent information. That’s where their performance excels. Just how fast a quantum computer can perform is based on the number of qubits. However, whenever you increase the number of qubits, you introduce the possibility of error, so what you actually need is stable qubits. Another problem is that quantum computing produces a lot of heat. So the problems of errors and heat still need to be solved.

LW: How close are we to a quantum computer that can break classical encryption?

Hojjati: To break a 400-bit RSA key, you would need a 1,000-qubit quantum computer, and the closest one that I have seen today is Google’s, which has around 70 qubits. That’s not enough to break RSA at this point. That being said, we’re in a transition period, and we shouldn’t wait around for quantum computers to be available before transitioning to post-quantum crypto.

LW: What’s the argument for doing this now?

Hojjati: It takes some forward thinking from the customer side. Do you really want to wait for quantum computers to be available before changing to post-quantum crypto? For example, are you willing to distribute 10,000 IoT sensors today and then pay the cost down the line when a quantum computer is there to break the algorithm? Or are you willing to push out hybrid (digital) certificates into those devices, at the time of production, knowing they’re going to be safe 20 or 30 or 40 years from now?

LW: Can you explain what a “hybrid” certificate is?

Hojjati: A hybrid solution is a digital certificate that features a classical crypto algorithm, like RSA or ECC, alongside a post-quantum crypto algorithm — both at the same time. It’s a single certificate that, by itself, carries two algorithms: one that allows you to communicate securely today, and another that NIST currently has under review. Picnic, for instance, was submitted by Microsoft Research and is one of the post-quantum crypto algorithms under NIST review; the other is qTESLA, which was submitted by ISARA Corp. A hybrid digital certificate gives customers the opportunity to see how a post-quantum crypto algorithm can work, without changing any of their infrastructure.

See also: Global Trend Map No. 12: Cybersecurity

LW: So you take one big worry off the table as numerous other complexities of digital transformation fire off?

Hojjati: Absolutely. This is one of the elements of security by design. When you’re designing a device, are you thinking about the threats that are going to happen tomorrow? Or are you considering the threats that are going to happen 10 or 20 years from now? Solving this problem is actually doable today, without changing any current infrastructure, and you can keep costs down while keeping the security level as high as possible.
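To make the hybrid idea concrete, here is a minimal Python sketch of a single credential carrying both a classical RSA signature (via the widely used cryptography library) and a post-quantum signature over the same data. The post-quantum part is only a stub, since Picnic and qTESLA were still under NIST review and have no standard Python implementation, and the credential structure here is an illustration rather than an actual certificate format.

```python
# The RSA portion uses the real `cryptography` package; the post-quantum
# portion is a stub, standing in for a Picnic/qTESLA signature.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

def pqc_sign_stub(message: bytes) -> bytes:
    """Placeholder standing in for a Picnic/qTESLA signature -- not real PQC."""
    return b"PQC-SIGNATURE-PLACEHOLDER:" + message[:8]

message = b"iot-sensor firmware manifest v1"

# Classical signature: what keeps communication secure today.
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
classical_signature = rsa_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Post-quantum signature: meant to stay safe decades from now.
pqc_signature = pqc_sign_stub(message)

# The "hybrid" credential bundles both signatures over the same data, so a
# relying party can verify whichever algorithm it trusts.
hybrid_credential = {
    "subject": "iot-sensor-0001",
    "classical_algorithm": "RSA-2048 with SHA-256",
    "classical_signature": classical_signature,
    "pqc_algorithm": "stub (stand-in for Picnic or qTESLA)",
    "pqc_signature": pqc_signature,
}
print(len(hybrid_credential["classical_signature"]))  # 256 bytes for an RSA-2048 signature
```

An actual hybrid certificate would be an X.509 structure rather than a dictionary; the sketch only shows the two-signatures-over-one-payload idea that lets today's infrastructure keep working while the PQC algorithm rides along.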

Byron Acohido

Byron Acohido is a business journalist who has been writing about cybersecurity and privacy since 2004, and currently blogs at LastWatchdog.com.