
Improving chances of overcoming startup hurdles


The bankruptcy filing of an insurtech startup has drawn some recent attention, so here is our two cents:

In the world of startups, the fact that Human Condition Safety filed for bankruptcy in March 2017 would generate a headline something like: "Venture-backed startup filed the first bankruptcy of the day," or perhaps "the first bankruptcy of the week." Ho hum. 

In an era of accelerated development times and rapid testing and iteration of products, thousands of very smart people launch startups built on solid premises, just as Human Condition Safety did in 2014, when its founders thought they could use artificial intelligence and wearables to simulate work environments and help employees learn to be safer. These startups often find a name client, as the company did with Bechtel, and attract real money, as Human Condition Safety did when it raised $18 million at a valuation as high as $90 million. Then the founders discover that reality is much different from theory. 

This isn't a negative. All sorts of studies have found that more than 90% of startups fail. The failures are a feature, not a bug, for the innovation ecosystem. The failures don't show that the founders are stupid, bad people, etc. They just show that people are being appropriately ambitious about important problems with potentially huge payoffs.

The failure rate for startups does raise a question that is worth some attention: What do best practices tell us about picking viable early-stage companies? In other words, can we do better than the standard failure rate?

Stats and research completed through our Innovator’s Edge platform strongly suggest the answer is yes.

Innovator's Edge is populated by more than 40,000 early-stage firms in 175 countries, with 7% of the firms qualifying as insurtech. (The remaining 93% are entrepreneurs with the potential to redefine risks we take as a given today.) Among all startups in the platform, 2017 funding transactions totaled nearly $345 billion. Research on the platform shows that insurers don’t limit involvement to investing. Participation includes mentoring, adopting, promoting and buying, as has happened with WeGoLook, Snapsheet, AppBus, Pypestream, RiskBlock, RiskGenius, Jamii, etc. Any additional involvement by insurers improves success rates.

The platform also shows that insurance-based venture investors appear to keep faith in founding teams longer and therefore tend to avoid replacing founders. Nor are they investing in teams that are looking for a quick exit. Changing out founding teams for executives selected by investors is a tactic of last resort, which fits with research we’ve done that found that 70% to 90% of a decision to support an early-stage company should be based on the team. Early-stage firms tend to lack experience at scale, but operating at scale is what insurance companies do well, allowing for the sort of partnership that can produce great success. 

Let this story of a bankruptcy that took place nearly a year ago serve as a reminder of the risk/reward equation at work with insurtechs. The risks are high, so firms will fail. Get comfortable with that reality. However, when a startup with a solid team and a decent idea finds a good fit with an insurance investor that has clarity about the job to be done, the rewards are spectacular. 

Have a great week.

Paul Carroll
Editor-in-Chief


Paul Carroll

Paul Carroll is the editor-in-chief of Insurance Thought Leadership.

He is also co-author of A Brief History of a Perfect Future: Inventing the Future We Can Proudly Leave Our Kids by 2050 and Billion Dollar Lessons: What You Can Learn From the Most Inexcusable Business Failures of the Last 25 Years and the author of a best-seller on IBM, published in 1993.

Carroll spent 17 years at the Wall Street Journal as an editor and reporter; he was nominated twice for the Pulitzer Prize. He later was a finalist for a National Magazine Award.

How Amazon Could Disrupt Care (Part 1)

Although Berkshire and JPMorgan also bring lots of employees to the alliance, Amazon is key to thinking big, starting small and learning fast.

“The ballooning cost of healthcare acts as a hungry tapeworm on the American economy.” That’s how Warren Buffett framed the context as he, Jeff Bezos and Jamie Dimon announced the alliance of their firms, Berkshire Hathaway, Amazon and JPMorgan Chase, to address healthcare.

The problem is serious. Healthcare costs in the U.S. have been growing faster than inflation for more than three decades. There is little relief in sight. A Willis Towers Watson study found that U.S. employers expect their healthcare costs to increase by 5.5% in 2018, up from a 4.6% increase in 2017. The study projects an average national cost per employee of $12,850.

The three companies have a combined workforce of 1.2 million. Based on the Willis Towers Watson estimate, they could spend more than $15 billion on employee healthcare this year. But, what can the alliance do about it?
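As a quick sanity check on that $15 billion figure, here is a minimal back-of-the-envelope sketch in Python that simply multiplies the two numbers quoted above; the alliance has not published its actual spend, so this is only an order-of-magnitude estimate under the article's assumptions.

```python
# Rough, illustrative estimate only: multiplies the workforce size and the
# Willis Towers Watson per-employee cost cited in the article.
employees = 1_200_000        # combined workforce of Amazon, Berkshire and JPMorgan
cost_per_employee = 12_850   # projected average healthcare cost per employee (USD)

estimated_spend = employees * cost_per_employee
print(f"Estimated annual healthcare spend: ${estimated_spend / 1e9:.1f} billion")
# -> roughly $15.4 billion, consistent with "more than $15 billion"
```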

On that, Buffett was less clear: “Our group does not come to this problem with answers. But, we also do not accept it is inevitable."

The challenge is formidable. As the New York Times noted, employers have banded together before to address healthcare costs and failed to make much of a dent in spending. How will this effort be different?

See also: 10 Mistakes Amazon Must Avoid in Health  

If this alliance is simply another employer purchasing cooperative, it will probably have little effect. Neither 1.2 million employees nor $15 billion in spending is all that significant in a 300 million-person, $3.2 trillion U.S. healthcare market. The alliance might nudge the healthcare industry toward incrementally faster, better and cheaper innovations—but not much more.

If, however, the alliance thinks big and structures itself as a test bed for potentially transformative ideas, innovations and businesses, it could have a disruptive effect.

Amazon is the critical ingredient in this latter approach. Although all three companies bring employees and resources (both critical), only Amazon brings particularly relevant technological prowess and disruptive innovation experience. Amazon could think big by simply applying the standard operating principles and capabilities that it has perfected for retail—comprehensive data, personalization, price and quality transparency, operational excellence, consumer focus and high satisfaction—to healthcare.

It also has differentiated technologies like Alexa, mobile devices, cloud (AWS) and AI expertise. It could leverage its recent years of healthcare-specific exploration, such as its work in cardiovascular health, diabetes management, pharmacies, pharmacy benefit management, digital health and other healthcare research. It could use Whole Foods as a physical point of presence.

Amazon could then start small and learn fast. It could crunch the numbers and come up with employee segments that are large and interesting enough for experimentation. For example, it might focus on improving quality and satisfaction for the sickest 1% to 2% of employees. It might focus on those with hypertension or diabetes. It might focus on helping those undergoing specific treatments, such as orthopedics or cancer. It might focus on preventing the rise of chronic diseases in those at most risk, such as those with prediabetes or uncontrolled hypertension. It might focus on narrow but high-impact issues, like price transparency or prescription adherence. Privacy issues would have to be addressed, but there are many opportunities to address well-known but as-yet-unsolved problems in healthcare.
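To make the "start small" idea concrete, here is a minimal sketch of the kind of segmentation analysis described above. Everything in it is hypothetical: the member table, the cost field and the 2% cutoff are assumptions for illustration, not anything Amazon or the alliance has disclosed.

```python
# Hypothetical illustration: flag the costliest 1-2% of members as a candidate
# segment for a focused care-management experiment. All data is fabricated.
from dataclasses import dataclass
import random

@dataclass
class Member:
    member_id: int
    annual_claims_cost: float  # total paid claims for the year (hypothetical)

random.seed(0)
members = [Member(i, random.paretovariate(1.2) * 1_000) for i in range(100_000)]

# Sort by cost and take the top 2% as the experimental segment.
members.sort(key=lambda m: m.annual_claims_cost, reverse=True)
segment = members[: len(members) // 50]

share = sum(m.annual_claims_cost for m in segment) / sum(m.annual_claims_cost for m in members)
print(f"Top 2% of members ({len(segment):,}) account for {share:.0%} of modeled spend")
```

Sorting by cost and taking the top slice mirrors the "sickest 1% to 2%" focus mentioned above; any real analysis would, of course, also need the privacy safeguards the article notes.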

By first focusing on the quality, satisfaction and cost for the alliance’s employees, Amazon could justify its efforts through increased employee productivity and satisfaction and reduced cost. Indeed, the alliance emphasized that its effort was “free from profit-making incentives and constraints.”

See also: Whiff of Market-Based Healthcare Change? 

That doesn’t mean, however, that profits are not possible in the future. Amazon built its AWS cloud computing business by first solving an internal problem in a plug-compatible, low-cost and scalable manner, and then bringing it to the market.

That business-building approach would provide an additional incentive that goes beyond cost-cutting: a new business platform for Amazon, an enormous investment opportunity for Berkshire and (despite short-term consternation to existing clients) investment banking opportunities for JPMorgan.

In Parts 2 and 3 of this series, I will explore what an Amazon-inspired transformation in healthcare might look like and how Amazon is well-positioned to make it happen.

Is Big Data a Sort of Voodoo Economics?

Some predictive models are “Rube Goldberg” constructs; the worst resemble “a bunch of monkeys heading up the Manhattan Project.”

Is charging consumers more for their insurance because they use a Hotmail email account or have a "non-English-sounding" name a valid application of predictive modeling, or does it constitute presumptive modeling and unfair discrimination? Does it matter if "big data" is riddled with bad data and bogus information as long as it improves insurer expense ratios? Is this the insurance industry's version of voodoo economics? It’s no secret that I’ve written things about the Holy Crusade known as insurtech that are critical or at least suggest caution in climbing aboard the hype and hyperbole bandwagon. Insurtech has been touted as the philosopher's stone with its ability to turn "lead" data into golden predictions. One component of this “movement” is big data, the miracle cure for perceived stagnant industry profits known as data analytics and predictive modeling. There is nothing new about the importance and value of data and its wiser big brother, information. Predictability, in the aggregate, is the cornerstone of industry stability and profitability. It’s the foundation of actuarial science. But, to be of value, the data must be credible, and the models that use it must be predictive by more than mere correlation. And, to be usable, the data and models must meet legal requirements by being risk-based and nondiscriminatory. That’s where one of my concerns lies. Just how valid and relevant is the data, and how is it being used? What prompted this article was a blurb in Shefi Ben-Hutta’s Coverager newsletter [emphasis added]: “Certain domain names are associated with more accidents than others. We use a variety of pieces of information to accurately produce a competitive price for our customers.” – Admiral Group in response to research by The Sun that found the insurer could charge users…extra on their car insurance, simply for using a Hotmail email account instead of a Gmail one.” This revelation came just days after The Sun ran an article accusing the U.K. insurer of charging drivers with non-English-sounding names as much as £900 extra for their insurance. I don’t know enough about insurance in the U.K. to opine about the potential discriminatory nature of jacking premiums on people whose names don’t sound “English,” but my guess is that U.S. state insurance departments likely would not look favorably on this as a rating factor. See also: Strategies to Master Massively Big Data   Historically in the U.S., P&C insurance rates have been largely based on factors that are easily ascertained and confirmed. For example, the “COPE” acronym (construction, occupancy, protection and exposure) incorporates most of the factors used in determining a property insurance rate. From the standpoint of the fire peril, frame construction is riskier than fire-resistive construction. A woodworker is riskier than an office. Having a fire hydrant 2,000 feet away from a building is riskier than one 200 feet away. It makes sense. It’s understandable. It’s provable. The risk inherent in these factors is demonstrable. The factors are understandable by consumers and business owners. It’s easy to identify what insureds can do to improve their risk profile and reduce their premiums. Advice can be given on how to construct a building, install protective systems, etc. to reduce risk and insurance costs. Traditional actuarial models are proven commodities, and state insurance regulators have the expertise and ability to evaluate the efficacy of rate changes. What these factors are not, in many cases, is inexpensive. 
Confirming this information may require a physical inspection. Some state laws require or compel such inspections. In my state, our valued policy law says that buildings must be inspected within 60 days of policy inception or the law is triggered and a carrier may have to pay policy face value for a total fire loss. Are the insurtech startups selling homeowners insurance even aware of this? It is understandable that insurers want to reduce any unnecessary underwriting expenses if there are acceptable alternatives. Doing so may improve profitability or make them more competitive by enabling premium reductions. This is where insurtech and technology in general can play a valuable role. Using reliable data on construction and size of buildings, building code inspection reports, satellite mapping for hydrant location and so forth can have an almost immediate impact on the carrier expense side and potentially the loss component. To a large extent, this is actually being done, but the search for something more (or less, if we’re talking about expenses) continues. Enter “big data” and predictive modeling, along with a horde of people who know absolutely nothing about the insurance industry but a lot about deluding gullible people with hip press releases. They tout the salvation of phone apps, AI bots and “black box” rating algorithms with 600 variables and factors. Factors such as whether someone, according to their Facebook page or other online source, bowls in a Wednesday night mixed league where (speaking from personal experience) the focus is more on beer consumption than bowling and how that might affect the risk of an auto accident. The $64,000 question is how reliable are these predictive model algorithms and how credible is the data they use? The author of an article titled “How Trustworthy Is Big Data?” claims that there is typically a lot less control and governance built into big data systems compared with traditional architectures: “Most organizations base their business decision-making on some form of data warehouse that contains carefully curated data. But big data systems typically involve raw, unprocessed data in some form of a data lake, where different data reduction, scrubbing and processing techniques are then applied as needed.” In other words, there may be little up-front vetting of the information because that takes time and costs money and, when acquired, there is no certainty that the data will ever be used. So, the approach may be to vet the data only when used, and, as the article suggests, that can be problematic. The article also addresses the ethics of acquiring information on individuals for what may be perceived as nefarious reasons (e.g., price optimization): “Just because something is now feasible doesn’t mean that it’s a good idea. If your customers would find it creepy to discover just how much you know about their activities, it’s probably a good indication that you shouldn’t be doing it.” Going back to The Sun’s Admiral reports, what impression would it make on Admiral’s customers if the insurer advertised, “Pay less if you have an English-sounding name!” Would any insurer advertise something they’re allegedly doing behind closed doors? It’s like the ethical decision criteria of, what would your mother think if she knew what you were about to do? The right to do something doesn’t mean that doing it is right. Does black-box algorithmic rating enable and potentially protect this practice? 
I mentioned at the outset of this article that the Admiral report prompted the article. What compelled the article was a recent personal experience when I received a $592 auto insurance invoice a little more than two months into my policy. The invoice attachments never really said why the carrier wanted additional premium, but a quick review indicated the reason. Our son moved out of the house three years ago, and we removed him from our insurance program, including his vehicle. He still uses the same agency (different insurer) that I’ve used since 1973 to insure his auto, condo and personal umbrella. Our insurer learned that his vehicle registration notice is still mailed to our address. With that information, they (i.e., their underwriting model) unilaterally concluded that he still must live here, so they added him back to our insurance program and made him the primary driver of one of our three autos (the most expensive one, of course). I’m not sure what they thought happened to his vehicle. But, of course, no one “thought” about anything. An algorithmic decision tree spit out a boiler-plated invoice. I’ve been with this carrier now for four years, loss-free, and paid them somewhere in the neighborhood of $20,000 in premiums, yet they could not invest 10 minutes of a clerical person’s time to make a phone call and confirm my son’s residency. Neither we nor our agent received any notice or inquiry prior to the invoice, but my agency CSR (who, I'm happy to report, is still an empathetic human) was able to quickly fix the problem. I have written about my personal experiences with a prior insurer involving credit scores. My homeowners premium was increased by $1,000 and, by law, I was advised that it was due to credit scoring. As it turned out, the credit reports of a Wilson couple in Colorado were used. Two years later, my homeowners premium was bumped $700 based on three “reason codes,” which I was able to prove were bogus, and the carrier rescinded the invoice. Now I’m being told that my current insurer’s information source tells them that my son has moved back home. I realize that these tales are anecdotal, but three instances in five years? How pervasive is this misinformation? Is this what “big data” brings to the table? Big, BAD data and voodoo presumptive (not predictive) modeling? Who really benefits from this? Anyone? One of the insurtech buzz words going around is “transparency.” What’s transparent about “black box” underwriting and rating? At a convention last year, I spoke at length to a data scientist who was formerly with IBM and is now an insurance industry consultant. Without naming names, he characterized some of the predictive models he has examined as “Rube Goldberg” constructs, with the worst ones resembling “a bunch of monkeys heading up the Manhattan Project.” See also: Big Data? How About Quality Data?   Another consultant expressed his concern about some data companies. An NAIC presentation he attended listed some parameters relative to data points being used by carriers. The presenter expressed confidence that carriers were disclosing all of their data points. He is convinced, however, that carriers are using 25% to 50% more data points than the NAIC seems to be aware of. He has written about the abuse of data that lacks an actuarial grounding in risk assessment, again, a requirement of some state laws. Among the many problems with “black box” rating is the fact that no one may be able to explain how a particular premium was derived. 
No one may be able to tell someone how to reduce their premium. Perhaps most important, regulators may be unable to determine if the methodology results in rates that are unfairly discriminatory or otherwise violate state laws that require that rates be risk-based. Presumably, future rate filings will simply be a giant electronic file stamped "Trust Me."

"Big data" might be beneficial to insurers from a cost, profitability and competitive standpoint, but it's not clear how or even if it will affect consumers in a positive way. All the benefits being touted by the data vendors and consultants accrue to insurers, not their customers. In at least one case, if you have a “non-English-sounding” name, the impact is adverse. The counterargument from the apostles of big data is that the majority of people will benefit. Of course, that was arguably the logic used when schools were segregated, but that doesn’t justify the practice.

In the book “Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech,” the author points to an investigation of a correctional facility system that used proprietary algorithms to help decide bail terms, prison sentences and parole eligibility using various factors, some alleged to be discriminatory (e.g., arrest records of neighbors where the person lived). A Wall Street Journal article, “Google Has Chosen an Answer For You – It’s Often Wrong,” demonstrated how searches often indicated a bias or manipulation by whoever constructed the algorithms being used or by how the search parameters were entered by users. Errors in building replacement cost valuations are often blamed on incompetent or untrained data harvesters and users. Even when the data is presumed to be accurate, it can be used incorrectly.

In 2016, I wrote an article for Independent Agent magazine titled, “The Six Worst Things to Happen to Insurance in the Past 50 Years.” No. 3 on my list was the growing obsession with data vs. people. When I write about these things, I know I run the risk of being characterized as the old man on his front porch yelling at the “disrupter” kids to get out of his yard, but I don’t think I’m a Luddite. I love and embrace technology. I had a PC before IBM did. I still have the first iPod model. My phone is full of nifty apps. My son is a data scientist in the healthcare industry. I get it. But technology is a tool, not a religion. Far too many people treat technological innovation as sacrosanct and infallible, and anyone who questions or challenges its legitimacy and righteousness is committing heresy. What’s next, a Snapchat invitation from an AI bot that says, “Welcome to the Insurance Matrix, Mr. Anderson”? Not yet, I hope.

Bill Wilson

William C. Wilson, Jr., CPCU, ARM, AIM, AAM is the founder of Insurance Commentary.com. He retired in December 2016 from the Independent Insurance Agents & Brokers of America, where he served as associate vice president of education and research.

Top 10 Claims Trends That Will Affect 2018

Here are the top 10 reasons why 2017 should be considered the kickoff year for digital transformation in insurance claims.

2017 will be remembered as the landmark year for digital transformation of claims processing within the property/casualty insurance industry. After all, it was the year we saw the use of drones for home roof inspections, auto appraisals via photos, customer video chat and bots used in claims processing. While many of these solutions have been topics of discussion in the industry for quite some time, insurers began to implement them in earnest through new technology during 2017. Here’s a deeper look at what we think are the top 10 reasons why 2017 should be considered the kickoff year for digital transformation in insurance claims.

Big data made a big splash

While big data has been widely used within insurance underwriting for several years, in 2017 we saw multiple and varied big data claims applications emerge. For example, LexisNexis Risk Solutions rolled out Claims DataFill to a number of insurers that were seeking actionable data at first notice of loss (FNOL) to expedite the first report and collect critical data needed for claims adjudication. This solution accelerates the claims handling process. You can find many other examples of innovative big data applications within the insurance sector in this 2017 recap blog post, written by LexisNexis Risk Solutions CEO Bill Madison.

The “touchless claims” vision became real

As digital capabilities become more robust and customers increasingly demand digital in every aspect of their lives, fully automated claims processing is evolving into a strategic imperative. In a recap of our Future of Claims Study, you’ll see that the compelling vision of the “touchless” claim is turning into a reality. Insurers are accelerating their movement along the automated claims processing continuum to drive greater efficiency, increase profitability and, most importantly, deliver a better customer experience.

Photo-based appraisal displaced in-person auto appraising

Speaking of touchless claims, insurer Allstate demonstrated its commitment to a more virtual claims handling approach through the closure of its drive-in appraisal centers in favor of photo-based appraising, made possible through its QuickFoto mobile app claims option. The company also made additional investments in other digital technologies that support photo-based appraising, including its Virtual Assist solution for auto body shops. These investments are already reaping benefits for Allstate in terms of faster customer service and greater process efficiencies.

See also: Disruptive Trends in Claims Cycle (Part 1)

Drones became the new “insurance inspector”

With four hurricanes striking the U.S. in 2017, drones were quickly put to work to allow insurers to accelerate roof and exterior inspections. A drone could also travel into flooded areas to give adjusters access even before the waters subsided. There were several media spots highlighting the use of drones in claims, including NBC’s Today show, where executives Glenn Shapiro of Allstate and Patrick Gee of Travelers were interviewed about the benefits of drones for expediting home claims. In 2017, we learned that using drones can shave several days off claims processing time, free human resources for more valuable and strategic tasks and save insurers money.

Interactive features strengthened relationships with customers

The digital world offers a plethora of tools that can help insurers improve every aspect of the claims process, from FNOL through closure. A very important element in the entire process is the customer experience. 
Features such as video chatbots that can interact with customers on simple processes, voice analytics that sense customer mood and behavioral analytics that predict customer needs not only increase efficiency but also create a stronger connection between insurers and their customers.

Chatbots became sociable

Consumers spend a lot of time on social media, so Progressive Insurance is meeting up with them online. In October 2017, the company announced the launch of its artificial intelligence chatbot on Facebook Messenger. Prospective and current customers can access the chatbot through the company’s Flo Facebook fan page, thereby extending Progressive’s reach to upward of 1.4 billion active Messenger users. Including these types of AI features in customer outreach capabilities is likely to become standard practice, as insurers continue to digitally transform their businesses to remain competitive and better serve their customers.

Claims apps became integral to the claims processing digital transformation

While undergoing a digital transformation is, in some aspects, unique to each insurer, a simple three-step strategy serves as a useful common template. Step three in this strategy is embracing the right tools to meet business needs. Insurers now have a broad suite of options and applications available to help them digitally transform their organizations for the best outcomes. Options include mobile capture, process intelligence, customer communications management, robotic process automation and case management.

Telematics entered its next phase

Telematics has proven itself as a foundation for usage-based insurance, providing valuable information to the insurer while helping customers become safer drivers and reduce their premiums. But the technology is far short of maturity in terms of the broader value it can offer both customers and insurers. This forward-thinking LexisNexis Insurance Insights blog post explains the telematics opportunities that lie ahead in the areas of customer care, process improvements and better data to drive stronger decision making. For example, telematics shows great promise for driving more efficient claims management as well as helping to prevent fraud through real-time alerts and an expedited claims cycle.

See also: Global Trend Map No. 6: Digital Innovation

Accessing police records became easier and more accurate than ever

Police reports have long been an integral part of claims processing. However, accessing the reports and rekeying important information from them is time-consuming, can result in inaccuracies and doesn’t take into account the future value of the data. Automated police record retrieval has changed all that. Claims adjusters can now instantly order and retrieve police report data in real time, then automatically integrate that data and its data elements not only into an existing claim but also into the claims system for future retrieval. These capabilities create greater efficiencies and also enable insights that can drive future decision making. Automated police record retrieval promises to be a game-changer for the industry.

AI transformed the customer experience

Often associated with a negative event (like an auto accident or a personal property loss), contacting an insurance agent is typically not at the top of the list of things customers want to do. AI is changing all that. 
2017 marked the year that AI came into its own within the insurance industry in a number of ways, including providing a much more compelling and satisfying customer experience. For example, the insurance industry is exploring multiple ways to leverage AI in claims to enhance the customer experience and shave days off claims processing time. Additionally, AI enables personalization that enhances the customer relationship without any human interaction required. Clearly, 2017 was the year of innovation implementation for claims. With so many promising new technologies and capabilities gaining traction and establishing a solid foothold within the industry, the future looks very bright. We expect to see acceleration in claims automation during 2018 as companies build on the technology advances of 2017.

Quantum Computers: Bigger Than AI?

A lot of good will come from quantum computing, including better weather forecasting, but it could also open up a Pandora’s box for security.

Elon Musk, Stephen Hawking and others have been warning about runaway artificial intelligence, but there may be a more imminent threat: quantum computing. It could pose a greater burden on businesses than the Y2K computer bug did toward the end of the ’90s.

Quantum computers are straight out of science fiction. Take the “traveling salesman problem,” where a salesperson has to visit a specific set of cities, each only once, and return to the first city by the most efficient route possible. As the number of cities increases, the problem becomes exponentially complex. It would take a laptop computer 1,000 years to compute the most efficient route between 22 cities, for example. A quantum computer could do this within minutes, possibly seconds.

Unlike classic computers, in which information is represented in 0’s and 1’s, quantum computers rely on particles called quantum bits, or qubits. These can hold a value of 0 or 1 or both values at the same time — a superposition denoted as “0+1.” They solve problems by laying out all of the possibilities simultaneously and measuring the results. It’s equivalent to opening a combination lock by trying every possible number and sequence simultaneously.

Albert Einstein was so skeptical about entanglement, one of the other principles of quantum mechanics, that he called it “spooky action at a distance” and said it was not possible. “God does not play dice with the universe,” he argued. But, as Hawking later wrote, God may have “a few tricks up his sleeve.”

See also: The Race to Quantum Computers

Crazy as it may seem, IBM, Google, Microsoft and Intel say that they are getting close to making quantum computers work. IBM is already offering early versions of quantum computing as a cloud service to select clients. There is a global race between technology companies, defense contractors, universities and governments to build advanced versions that hold the promise of solving some of the greatest mysteries of the universe — and enable the cracking open of practically every secured database in the world.

Modern-day security systems are protected with a standard encryption algorithm called RSA (named after Ron Rivest, Adi Shamir and Leonard Adleman, the inventors). Its security rests on the difficulty of finding the prime factors of very large numbers. It is easy to reduce a small number such as 15 to its prime factors (3 and 5), but factoring numbers with a few hundred digits is extremely hard and could take days or months using conventional computers. But some quantum computers are working on these calculations, too, according to IEEE Spectrum. Quantum computers could one day effectively provide a skeleton key to confidential communications, bank accounts and password databases.

Imagine the strategic disadvantage nations would have if their rivals were the first to build these. Those possessing the technology would be able to open every nation’s digital locks. We don’t know how much progress governments have made, but in May 2016 IBM surprised the world with an announcement that it was making available a five-qubit quantum computer on which researchers could run algorithms and experiments. It envisioned that quantum processors of 50 to 100 qubits would be possible in the next decade. 
The simultaneous computing capacity of a quantum computer increases exponentially with the number of qubits available to it, so a 50-qubit computer would exceed the capability of the top supercomputers in the world, giving it what researchers call “quantum supremacy.” IBM delivered another surprise 18 months later with an announcement that it was upgrading the publicly available processor to 20 qubits — and it had succeeded in building an operational prototype of a 50-qubit processor, which would give it quantum supremacy. If IBM gets this one working reliably and doubles the number of qubits even once more, the resultant computing speed will increase, giving the company — and any other players with similar capacity — incredible powers.

Yes, a lot of good will come from this, in better weather forecasting, financial analysis, logistical planning, the search for Earth-like planets and drug discovery. But quantum computing could also open up a Pandora’s box for security. I don’t know of any company or government that is prepared for it; all should build defenses, though. They need to upgrade all computer systems that use RSA encryption — just like they upgraded them for the Y2K bug.

Security researcher Anish Mohammed says that there is substantial progress in the development of algorithms that are “quantum safe.” One promising field is matrix multiplication, which takes advantage of the techniques that allow quantum computers to be able to analyze so much information. Another effort involves developing code-based signature schemes, which do not rely on factoring, as the common public key cryptography systems do; instead, code-based signatures rely on extremely difficult problems in coding theory. So the technical solutions are at hand.

See also: Dark Web and Other Scary Cyber Trends

But the big challenge will be in moving today’s systems to a “post-quantum” world. The Y2K bug took years to remediate and created fear and havoc in the technology sector. For that, though, we knew what the deadline was. Here, there is no telling whether it will take five years or 10, or whether companies will announce a more advanced milestone just 18 months from now. Worse still, the winner may just remain silent and harvest all the information available.
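To close with a sense of scale, here is a rough, illustrative calculation of what brute force faces on the 22-city example mentioned earlier. The routes-per-second rate is an assumption (one billion evaluations per second); the "1,000 years" figure above is the article's own.

```python
# Illustrative only: count the routes a brute-force search would have to check
# for the 22-city traveling salesman example cited above.
import math

cities = 22
routes = math.factorial(cities - 1) // 2   # distinct round trips from a fixed start
rate_per_second = 1_000_000_000            # assumed classical evaluation rate

years = routes / rate_per_second / (60 * 60 * 24 * 365)
print(f"{routes:.3e} routes; about {years:,.0f} years at {rate_per_second:,} routes/second")
# -> roughly 2.6e19 routes and on the order of 800 years, the same ballpark as the article's figure
```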

Vivek Wadhwa

Vivek Wadhwa is a fellow at the Arthur and Toni Rembe Rock Center for Corporate Governance, Stanford University; director of research at the Center for Entrepreneurship and Research Commercialization at the Pratt School of Engineering, Duke University; and distinguished fellow at Singularity University.

How to Disrupt Drug Prices

The pharmaceutical pricing system is opaque and perverse — and it’s the only system we know. But it doesn’t have to be this way.

These days, we’re not surprised to open the paper and see another headline about the latest Epi-pen, Martin Shkreli or yet another new drug with an exorbitant price tag that has no basis in reality. Since Sovaldi, a pill to treat Hepatitis C, hit the market at a price point of $1,000 (never mind that you could purchase it for $4 per pill in India), it has become acceptable for mass-market therapies to suddenly become very expensive — often to the tune of $100,000 per therapy per patient per year. So it might blow your mind to open up a newspaper (or your web browser) and learn that a new, more effective drug is significantly cheaper and better, especially if it is a cure for Hepatitis C. Mavyret, manufactured by AbbVie, is the first example of a new, brand name Hepatitis C drug that is actually better for patients and costs far less. Eighty percent of patients with Hep C can do an eight-week course, versus the alternative, manufactured by companies including Gilead and Merck, which requires a 12-week course. Mavyret is the only drug that works for genotypes 1-6, and it has a list price that is less than half of what the competitors charge, even when you factor in the bizarre middleman shenanigans. It ends up costing about $26,000 to cure a patient of Hep C. If that sounds high, just consider the fact that all the specialty meds for chronic conditions such as psoriasis are now $50,000-100,000 or more per year! Mavyret sounds too good to be true, right? In the world of specialty pharmaceuticals — an intentional labyrinth of perverse financial incentives, with zero transparency for the payer or patient — it is actually not too good to be true. But clients and their employees probably still won’t reap the benefits. Unfortunately, our current system probably locks them into paying more for a drug for employees that is less effective, even though a cheaper alternative exists. See also: Stop Overpaying for Pharmaceuticals   Most of our efforts to manage pharmacy costs rely on working with a pharmacy benefit manager or PBM, which uses strategies like formulary management, prior authorization and step therapy. PBMs are, as Bloomberg News explains, “the middlemen with murky incentives behind their decisions about which drugs to cover, where they’re sold and for how much.” For starters, your PBM may have contracted to have the more expensive drug on their formulary because that manufacturer offers them better rebates. This decision, of course, is not based on what is most effective for the patient, or cheaper for the payer. It is based on the formulary and written into the contract, so you are stuck with it. And with the bizarre economics of rebates for manufacturers, Gilead and other makers of Hep C drugs can argue that their post-rebate prices are only 50% to 60% of their list price, so they really aren’t committing too much highway robbery. The New York Times recently reported that, contrary to conventional wisdom, an increasing number of patients are being steered away from lower-cost generic drugs toward brand name alternatives because this is a better financial arrangement for the PBM, thanks to steep rebates from manufacturers trying to “squeeze the last profits from products that are facing cheaper generic competition.” We feel the financial pain of this broken system every day. Only a few years ago, specialty drugs composed a reasonable-sounding 10% of our overall drug spending. 
Last year, specialty drug spending bloated to 38% — and by 2018, it will be an astounding 50%, which is an increase of $70 million a day! The system is opaque and perverse — and it’s the only system we know. But it doesn’t have to be this way. Almost two decades ago, the internet revolution made the travel agency obsolete. Uber and Lyft have done the same thing to parts of the transportation industry, and Amazon continues to do this to many industries. What have all of these disruptive innovations taught us? They have taught us that we might, in fact, be able to make better decisions ourselves, without a middleman.

It is time for this type of disruptive innovation to hit the pharmacy benefit world. Today we have a system that focuses on controlling suppliers: PBMs that use profitability levers like formulary management, prior authorization and step therapy that, in reality, just limit our choices and prevent the functioning of a real market.

See also: Open Letter to Bezos, Buffett and Dimon

What if instead of focusing on supply, we focus on value? What if we begin by asking, “Is this drug really working for this patient? How well? And how does it compare with the alternatives?” This scenario represents an incredible opportunity for better health outcomes and savings compared with the status quo. What makes sense and saves cents at the same time isn’t being locked into a formulary choice, but the brave new world of opportunity for employees to get well sooner and pay less for a therapy like Mavyret.

As a benefits professional, breaking out of the status quo isn’t easy. After all, 10 years ago, it wouldn’t have worked to turn to your travel agent, taxi cab driver or storefront manager and tell them, “I need a new model for your industry, and I don’t need you anymore.” Organizations that have a bit of flexibility to experiment can, should and will be the early adopters of better ways. Some already exist. Don’t settle for the status quo. Keep asking vendors, “What have you done for my clients and their employees today?”

Pramod John

Pramod John is the founder and CEO of VIVIO Health, a startup that’s solving out-of-control specialty drug costs, a vexing problem faced by self-insured employers. To do this, VIVIO Health is reinventing the supply side of the specialty drug industry.

Increased Flood Exposure From Fires

What is often overlooked in California's wildfires is the threat of flood, not covered by homeowners insurance, that the fires create.

California wildfires are a fairly frequent occurrence: The five-year average of wildfires has been 4,770, with around 202,705 acres burned, but in 2017 there were 6,877, with 505,733 total acres burned, according to the California Department of Forestry and Fire Protection. While these events are devastating and heart-breaking, in almost all cases insurance supports recovery. What is often overlooked is the threat of flood, which is not covered by homeowners insurance, that wildfires create. By being aware of how flood exposure arises from wildfires, and knowing the insurance options available, homeowners can take action to be prepared should an event occur.

What causes flooding from wildfires

Flooding from wildfires can arise in three ways:
  • Landscape and runoff flow — The wildfire changes the characteristics of the landscape and increases the chances of flooding from runoff due to:
    • Vegetation — When it rains, trees and plants that cover the ground catch a certain portion of rain on their branches and leaves, which slows accumulation at the ground level and lets the moisture simply evaporate. Wildfires remove this natural protection.
    • Removal of litter — Litter (leaves and pine needles on the ground) normally allows for absorption of water, again slowing the runoff of water. In a wildfire, litter is eliminated.
    • Scarring — One of the major and most dangerous effects of wildfires is that ash and burnt top soil, which are water-repellent, are left behind. This has the same effect as taking an area and covering it with concrete – there will be a big increase in runoff of water, and normal drainage sources will quickly fill up.
    • It will take approximately five years before all of the vegetation is replenished and the soil begins to return to its original state.
  • Mudflow — Because so much of the vegetation is removed in a fire, the potential for mudflow is greatly increased. In some cases, mudflow is a covered peril under a flood policy as long as the mudflow has a milkshake consistency. If not, the flow is considered a landslide, which is not covered. Mudflow-type claims normally are higher-value claims because they are even more destructive than a normal flood.
  • Debris — Ash and debris from wildfires can cause problems with river and stream runoff. Debris is carried into the stream or river, causing natural dams and water buildup, which will push water out of the banks and can flood areas that have never flooded before.
See also: How to Fight Growing Risk of Wildfire

The importance of understanding what a 100-year floodplain is

A 100-year floodplain is based on a statistical probability needed by the insurance industry as a standard on which to base policies. The process of determining a floodplain is based on scientific measurements that are then put through a formula/statistical model to generate an estimate of how often a flood event could occur. For a 100-year floodplain, there is a "one-in-100 chance (1%) flooding may occur in any given year, or a 'return period' of once every 100 years," according to FloodSafety.com. It’s important for homeowners to understand this language so that they can have a better understanding of what their flood exposure is estimated to be. (A short illustration of what that 1% annual chance means over time follows the checklist below.)

Understanding the application of flood maps

Homeowners should take caution when looking at a flood map to determine their flood exposure, especially if they live in an area affected by wildfires. Flood maps are created using data gathered from many sources. Assumptions and statistics are applied to determine if in any one year there is a 1-in-100 chance that you will have a flood that will cover a certain area. Any major changes in the data, such as the occurrence of a wildfire, will inevitably affect the chance of a flood. What’s more, floodplains on a flood map cover a very large area. This means that properties can also be at higher or lower risk within a floodplain depending on where in the floodplain they are located. For example, one house in the 25-year floodplain may flood two feet deep during a storm, but a neighbor deeper in the floodplain may flood six feet deep. The bottom line — knowing how to apply a flood map to particular circumstances is a huge advantage for homeowners when determining their specific risk and what steps to take to mitigate those risks.

What can homeowners do?

Homeowners have several steps at their disposal that they can consider taking to protect their homes from flood exposure due to wildfires:
  • Use online tools to help evaluate the risk of flooding. These tools, such as those found here, can determine a flood risk rating score based on location, show the average number of claims, associated costs and more.
  • Purchase flood insurance. Flood insurance is not included in a standard homeowners policy. In many cases, flood insurance can bring peace of mind in regard to protecting a home. Many people feel they do not have a flood risk or cannot get flood insurance because they are not in a flood zone. That is simply not true. The only thing that will prevent a homeowner from getting flood insurance is if they are in a designated coastal barrier area (CBRA) or if their community does not participate in federal floodplain management programs.
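As promised above, here is a short illustration of what a "1% annual chance" flood means over a longer horizon. It assumes, as the standard return-period framing does, that each year is independent; the 30-year horizon is simply an example chosen to match a typical mortgage.

```python
# Illustration of the "1% annual chance" framing, assuming independent years.
annual_chance = 0.01   # 1-in-100 chance of such a flood in any given year
years = 30             # e.g., the length of a typical mortgage

at_least_one = 1 - (1 - annual_chance) ** years
print(f"Chance of at least one such flood in {years} years: {at_least_one:.0%}")
# -> about 26%
```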
See also: 2018 Workers’ Comp Issues to Watch

Conclusion

While it is not known what weather patterns will do over the next couple of years, we do know that there is a large population of people in California who have increased flood exposure. Informed homeowners who take the necessary steps to be prepared will be able to deal effectively with their flood exposure and with an event should one occur. This article is provided for general informational purposes only and is not intended to provide individualized business, insurance or legal advice. You should discuss your individual circumstances thoroughly with your legal and other advisers before taking any action with regard to the subject matter of this article.

Terry Black

Terry Black is vice president of business development for Aon National Flood Services, with nearly 20 years’ experience in the flood insurance industry and more than 30 years’ experience in property/casualty insurance.

Innovation Is Really Happening

It’s hard to change when you’re going 100 mph every day. Yet, despite challenges across the industry, a lot of innovation models are appearing.

We talk about innovation all the time. Yes, it is mandatory. Yes, your competitors are doing it. Yes, it means fundamental changes within an organization. But what does innovation really mean, and where do you start?

An innovation is defined as a new method, process or product. For insurance, the quest to build something new or transformative has quickly become foundational. The focus on innovation has naturally created an industry-wide platform that pushes us to rethink, redesign, reimagine and reinvent our roles, structures, products, services, processes and technologies.

We’ve been talking about and tracking innovation for several years now. And creating a culture of innovation is probably still the biggest hurdle. We are operationally focused. We are successful. We’re financially strong. It’s hard to change what you’re doing right. It’s also hard to change when you’re going 100 mph every day. Yet, despite challenges across the industry, we’re seeing a lot of innovation models appear. And in those models, we see segments of change and real action on innovation.

See also: Linking Innovation With Strategy

What’s fascinating about 2017 is that almost all insurers in our research (more than 90%) have some type of innovation happening across their companies. It has taken hold, even at a regional- and small-insurer level. It’s happening, from the $500 million companies all the way down to the $50 million companies. It takes the form of an innovation initiative, a digital strategy of which innovation is a part, or tracking and partnering with insurtechs and emerging technologies with innovation as a part of that. In other cases, innovation is disguising itself in the strategic initiatives. Or, it’s just flat-out called an initiative within the company in its own right.

About half of the insurers are still in the development phases of innovation, trying to define it, trying to figure out the process, the people, how to invest in it. And the other half are maturing. Some of those have been creating and evolving innovation labs for three, five or even seven years. I remember our first SMA Summit, when we gave an Innovation in Action Award to Allstate for creating an innovation lab. It was just the beginning of insurers thinking of innovation in a big way. Now, innovation labs are virtually everywhere, on every scale, inside and outside the structure of the insurance company walls.

As an industry, and for most insurers, it’s still a struggle to create innovative thinking and tie it back to strategies and operations – or to flip the approach and take our business strategies and tech investments and innovate those. Or to really find the value for the operations, or to see the value from a customer or agent/broker perspective. At the end of the day, insurers are still required to pay a claim, make money and provide dividends to their shareholders or policyholders. Right now, it’s a balancing act between fulfilling the needs of the daily business and progressing into the future with innovation initiatives.

See also: Global Trend Map No. 6: Digital Innovation

This year’s SMA Summit is scheduled for Sept. 17, 2018, and the theme is Transformation in Action. I can’t wait to hear the stories shared about the innovations realized. We have come such a long way in so short a time. It’s amazing to think that, just seven years ago, the thought of an innovation lab was groundbreaking. Finally, if you are just beginning, or unsure where to start, remember that starting small works! 
Just start somewhere. It may take a leap of faith to acknowledge that there are places in your organization that need to be transformed. But today, there is more support than ever – from all of us at SMA, from customers, from solution providers and even from competitors. The time is now. Just jump in! Read our latest research report on innovation: Innovation is Mandatory: SMA Annual Innovation in Insurance.

Deb Smallwood

Deb Smallwood, the founder of Strategy Meets Action, is highly respected throughout the insurance industry for strategic thinking, thought-provoking research and advisory skills. Insurers and solution providers turn to Smallwood for insight and guidance on business and IT linkage, IT strategy, IT architecture and e-business.

Top Challenge for HR Teams in 2018

In 2018, the key word is going to be “agile,” and human resource teams will be responsible for making it a reality across the organization.

We all have that colleague who overuses buzzwords and phrases such as "synergy," "deep dive" and "low-hanging fruit." Yet, every now and again, a buzzword arises that actually affects your business. In 2018, that word is going to be “agile,” and human resource teams will be responsible for making it a reality across the organization.

Business agility is the ability to adapt and respond rapidly to changes in the environment while sustaining success. Depending on the size and stage of your organization, agility may require bold internal policy transformations, a new approach to hiring or paradigm shifts in company culture.

Agile isn’t new, so why will it creep up the challenges list in 2018? With questions swirling around whether the economy can sustain its momentum, uncertainty has again set in. For organizations with the ability to adapt quickly to changing business and economic conditions, companywide and HR-focused objectives – particularly those focused on driving growth – won’t have to be compromised. Yet nearly 700 executives reported having little confidence in their companies’ ability to quickly mobilize in response to market shifts. More than 50% of those surveyed did not believe that their culture was adaptive enough to respond.

See also: The Human Resources View Of Health Care Benefits Needs To Change

To solve this quandary, CEOs are turning to HR leaders. CEOs are providing a mandate to create a workforce and cultivate a culture that is agile and nimble enough to capitalize on new opportunities and overcome deep-rooted organizational challenges. At Peak Sales Recruiting, we work with HR leaders from some of the most innovative and disruptive companies. Based on their collective experiences and the latest studies, we have compiled three ways human resource and talent acquisition teams can harness the power of agility to drive organizational performance in 2018:

1. Learn from Tom Cruise in "Mission Impossible": In the popular movie franchise, Cruise leads the IMF, a self-managed team that operates outside of government bureaucracy to save the U.S. from evil. In the real world, many leading companies such as Microsoft, Spotify and Airbnb have employed similar strategies. For example, "Scaling Agile @ Spotify" formed cross-departmental teams that functioned like a startup, with Spotify as the incubator. The teams – Tribes and Guilds – had delegated decision-making autonomy on key service and product development initiatives. Their work has been so important that they’ve helped the company keep giants like Apple from stealing market share in a growth segment. Bain research studied 300 large corporations worldwide and found that the top quartile’s key to success was that they spent 50% less time on unnecessary and ineffective collaboration. That is why small, talented teams that work outside of traditional hierarchical management systems can solve mission-critical issues faster. To be more agile in 2018, HR departments must work closely with C-suite executives on empowering middle and front-line leaders to build nimble, cross-functional teams.

2. Transform internally at scale: A Korn Ferry Institute study found that increasing investment in aligning HR practices with business objectives resulted in a 7.5% decrease in employee turnover and, on a per-employee basis, $27,044 more in sales, $18,641 more in market value and $3,814 more in profit. But in tech, HR departments have traditionally struggled to be aligned with rapid go-to-market plays and shorter product-development cycles. 
Workforce policy development, talent acquisition tactics and competency and performance review initiatives weren’t in sync with the changing pace of business. Using agility as its foundation, one company approached the problem differently. McKinsey recently released a case study called ING’s Agile Transformation. Recognizing the importance of agility at scale, ING made 3,500 employees re-interview for their jobs. Remarkably, 40% of them ended up in new positions within the company or were let go. In many cases, the change was not because of an employee’s skill set. The issue was whether employees could embrace the constant state of change that financial service and fintech organizations need to remain competitive. ING’s HR department led an unprecedented overhaul of its organization, and this experience can be replicated across organizations in 2018.

3. Hire change agents with a versatile skill set: The phrase, “We have always done it this way,” is kryptonite to success. Economic volatility and advances in technology, in certain cases, render last year’s business plan obsolete. While it is not necessary to reinvent the wheel every time, HR departments must evolve their competency models to hire people possessing change agent traits and experiences. Steve Jobs famously said that, if you want to hire change agents, hire pirates. He wanted employees who would challenge the status quo and were flexible, diverse, passionate and results-oriented. If HR departments do not bring in fresh people and ideas, the company will fail to improve in 2018.

See also: Hacking the Human: Social Engineering

Due to rapid changes in technology and the global economy, business agility will be a key differentiator between company failure and success. Being agile while maintaining company values and processes is a tightrope that CEOs are asking human resource professionals to figure out. Never has a buzzword meant so much.

Keith Johnstone

Keith Johnstone is the head of marketing at Peak Sales Recruiting, a leading B2B sales recruiting company launched in 2006. Johnstone leads all marketing activities and has successfully grown revenue and lead volume every quarter.

A Growing Challenge: Managing Talent

Savvy businesses are responding to a tight labor environment by reevaluating their recruitment, retention and compensation practices.

Recruitment and retention of sufficient workers presents a growing challenge for many U.S. businesses in manufacturing, construction and many other segments of the economy. Competition for workers continues to grow as the improving economy drives down unemployment and applies pressure on employers to increase wages. U.S. Bureau of Labor Statistics November employment statistics, for instance, showed employment continued to trend up in professional and business services, manufacturing and healthcare. While most businesses welcome the uptick in business opportunities, the pressure to increase wages threatens the ability of many of these businesses to take full advantage of these new opportunities. While welcoming the strengthened manufacturing economic performance, the National Association of Manufacturers says manufacturers continue to report that the inability to attract and retain a quality workforce is one of their top concerns. Employers in the healthcare and services industry increasingly are reporting similar challenges.

With tightening immigration standards making it more difficult to close gaps with foreign labor, savvy businesses are taking the initiative to respond to this changing labor environment by reevaluating their recruitment, retention and compensation practices. In addition to looking to recruit new workers from the ranks of the under- and unemployed, many businesses increasingly are looking to recruit employed workers from other employers by offering sweeter compensation, work-life balance, promotions or other enhanced employment opportunities. Businesses competing for the same workers will want to review their existing employment and compensation packages to help promote their ability to recruit workers and to retain existing workers.

See also: What Is the Business of Workers’ Comp?

In recognition that other businesses may target their best workers, businesses should shore up their compensation and retention practices and strengthen their noncompetition, trade-secret and other critical workplace protections to guard against disruptions from loss of key personnel. When conducting these activities, businesses should not rely on past legal experience. Federal and state law has evolved significantly regarding noncompetition, trade secret and other business intelligence safeguards. Businesses that have not done so in the past year should consider engaging experienced counsel to review their existing policies and practices for possible weaknesses and opportunities for enhanced strength. Businesses also may want to discuss opportunities for bonus or other golden handcuffs compensation packages to give key workers incentives to stay with the organization.

Employers also should recognize that departing employees may take advantage of opportunities to air resentments. In the face of these risks, employers will want to ensure that their existing wage and hour, harassment, safety and other workforce policies and practices are currently compliant as well as be prepared to respond to any allegations of past misconduct. Employers should carefully conduct exit interviews and investigate any alleged misconduct or other negative feedback to mitigate potential risks and liabilities. Employers also should consult with experienced employment and employee benefits counsel about appropriate design, administration and documentation of these policies, practices, arrangements and activities.

Cynthia Marcotte Stamer

Cynthia Marcotte Stamer is board-certified in labor and employment law by the Texas Board of Legal Specialization, recognized as a top healthcare, labor and employment and ERISA/employee benefits lawyer for her decades of experience.