
Most Controversial Claims Innovation

The innovation -- claims optimization -- requires quite a number of radical assumptions about what insurance is there to do and how it should do it.

Pushing at boundaries and challenging traditional notions is what innovation is all about. Rethinking something that people have taken for granted for too long can open opportunities for enhancing service, increasing efficiencies and generating revenue. Take claims. The idea that the claim settlement should reflect the insured loss has held sway for a great many years. Is it time for innovators to knock this one off its pedestal? It looks like that time has come, for insurers in the U.S. are starting to turn to claims optimization. To explain this, I should start with the perhaps more familiar concept of price optimization. The price version has the insurer setting the premium for insuring a risk according to what the policyholder is prepared to pay, rather than the level of risk that the policy is presenting. It involves using big data to work out the price at which particular types of customer will start to look for alternative quotes, and then progressively raising the premium to just below that amount. See also: Innovation Challenge for Commercial Lines   A leading figure in the U.K. insurance market recently described price optimization to me as “recognizing the true lifetime value of the customer and reflecting that in a better price.” It’s a nice quote, so long as you’re the insurer on the receiving end of that "lifetime value" and that "better price" and not the customer paying for them. I’ve yet to hear how that description fits in with the regulator’s interest in fair outcomes for customers. It would struggle to do so, for reasons I explain in this paper I wrote for the U.K.’s Chartered Insurance Institute last year. In short, the nature of insurance makes price optimization highly questionable on ethical grounds. It’s hardly surprising that a recent PwC survey found that 72% of insurance CEOs think it will be harder to sustain trust in a digitized market. With claims optimization, insurers would seem to be abandoning all remaining hope of sustaining trust in a digitized market. Claims optimization involves using big data to establish the amount that a particular claimant would be prepared to accept as settlement of the claim. So if all those algorithms pinpointed the claimant as someone in financially tight circumstances, then the settlement offered to that claimant would be optimized to reflect a greater and more immediate need for cash. This would involve tweaking the comparative speed of a cash settlement versus a replacement service and setting relative offers to achieve the optimized position. This still falls neatly within that description of optimization as “recognizing the true lifetime value of the customer and reflecting that in a better price.” Indeed, an insurer happy to optimize on price would hardly need to bat an eyelid at optimizing that other value determinant: claims expenditure. After all, the insurer would say, if a claimant is prepared to accept that lower settlement, why shouldn’t the insurer offer it? Let’s be quite clear: Claims optimization is an exploitation of the unequal balance of information and economic power between a consumer and an insurer. It is unprofessional, and it is unethical. Might this view perhaps point to me being a naysayer on innovation? Not at all – I have a postgraduate degree from one of the earliest courses in the U.K. on the study of scientific and technological innovation. 
That course taught me to recognize the multi-sided nature of innovation and to see that it is not some natural force of business evolution, but a mix of social, economic, philosophical and technological drivers. See also: The Great AI Race in Insurance Innovation   Despite what vendors of "big data solutions" say about optimization, it is not a natural next step for underwriting and claims. It rests on a number of quite radical assumptions about what insurance is there to do and how it should do it. It would be nice to see some of those assumptions brought out into the open and debated, but, unless Andrew Bailey at the FCA follows through on some of his hesitations about price optimization, I fear such a debate is unlikely. I have heard some chief executives at leading insurers wonder whether price optimization was the right thing to do. If, as it would appear, they no longer have any such qualms on the pricing side, then it is but a small step to their being equally relaxed about using it on the claims side. In that case, I look forward to listening to them explain their reasoning for doing so on a public forum like, say, BBC Radio’s Today program.
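For readers who want the mechanics spelled out, here is a deliberately simplified, purely hypothetical sketch of the two practices described above. The functions, inputs and figures are invented for illustration; they describe the behavior being criticized, not any insurer’s actual system.

```python
# Purely illustrative sketch of the optimization mechanics discussed above.
# All names, inputs and thresholds are hypothetical.

def optimized_premium(risk_based_premium: float, estimated_walkaway_price: float) -> float:
    """Price optimization: charge just below the point at which the model
    predicts this customer would start shopping around, regardless of risk."""
    margin = 0.98  # stay slightly under the predicted walk-away point
    return max(risk_based_premium, estimated_walkaway_price * margin)

def optimized_settlement(assessed_loss: float, predicted_acceptance_floor: float) -> float:
    """Claims optimization: offer the least the model predicts the claimant
    will accept, rather than the assessed value of the insured loss."""
    return min(assessed_loss, predicted_acceptance_floor)

if __name__ == "__main__":
    print(optimized_premium(500.0, 640.0))          # 627.2, not the risk-based 500
    print(optimized_settlement(10_000.0, 7_500.0))  # 7,500, not the assessed 10,000
```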

Duncan Minty


Duncan Minty is an independent ethics consultant with a particular interest in the insurance sector. Minty is a chartered insurance practitioner and the author of ethics courses and guidance papers for the Chartered Insurance Institute.

The Bad Actors Among Lead Sellers

Bad actors cause significant problems for lead buyers, but industry-wide use of technology can tackle the issue.

In today’s online shopping environment, lead sellers and lead-buying marketers alike work well together toward the same goal of delivering a great customer experience; however, they often struggle with an array of challenges along the way.

The Evolution of the Lead Seller-Lead Buyer Relationship

The relationship between online lead sellers and lead buyers began as a very simple one in the late '90s. The lead seller was also the lead generator—who dealt directly with the lead buyers. But it wasn’t long before small- and medium-sized publishers realized they could make money by generating leads that they would then sell to the large lead generators that had direct relationships with the lead buyers. This resulted in lead generators that would also aggregate leads generated by smaller publishers. By the mid-2000s, the large generators and aggregators began to monetize individual leads further by sharing them with other large generators and aggregators that had unique end-buyer relationships. Initially, this transpired in a private, trusted transaction environment with strict rules in place that were easily enforced. Two companies worked together to maximize the monetization of each other’s leads. Collectively, they made more money, and the consumer had more options available. It was a win-win-win for everyone in the ecosystem. As more of these private sharing arrangements developed (driven by the additional monetization opportunity), the technology evolved to support it. Hence the ping post ecosystem, which publishers, aggregators and generators leverage to best monetize leads. Much like in a stock exchange, sellers and buyers come together to create an efficient market with a variety of options for the consumer, and the highest bidders are typically the entities with the best consumer offer. See also: Changing Business Models, ‘New’ ERM

The Present State of the Lead Seller-Lead Buyer Relationship

In insurance, the relationship between sellers and buyers is generally strong, as long as the publishers, aggregators and generators play by the rules. For example, leads should only get sold to a certain number of buyers in a shared-lead world, and exclusive leads should only be sold to one buyer. Other examples of rules include:
  • No manipulation of the consumer data
  • No recycling of leads later
  • No fake leads
  • No non-TCPA-compliant leads
  • No incentive-driven leads
  • No unauthorized sales to stated end-buyers
It is the “bad actors” that don’t play by these rules in the ping post ecosystem that can cause significant problems. We can look to the evolution of the mortgage and education verticals to learn how to solve these problems.

Solutions for Today’s Lead Seller Challenges

TCPA Compliance. One of the most stressful challenges that lead sellers face today is TCPA compliance. Given that TCPA case filings increased more than 940% between 2010 and 2015, coupled with the fact that consumers are being encouraged to file suits by some law firms, the TCPA has become a huge hurdle for sellers to overcome. Both lead sellers and buyers must avoid exchanging non-TCPA-compliant leads and make sure they have persuasive evidence of consent in the event they, or end-buyers, face a complaint or lawsuit.

Measuring Consumer Intent. Another challenge sellers face is gaining the ability to measure the intent of each consumer. With insights into the individual consumer journey, you gain the ability to measure the intent—and therefore the value—of each lead. There are technology solutions available that enable you to measure consumer intent.

Those Bad Actors Not Playing by the Rules. Many lead buyers are actively leveraging technology to validate consumer data as “good data,” and some are using de-duping solutions to minimize buying the same lead twice. In the ping post ecosystem, much of the data on the origin and history of a lead is “contributed data." The challenge of eliminating old or recycled leads, dupes, fake leads and no-intent leads stems from a lack of ability to verify that “contributed data” as fact. For example, an insurance lead aggregator buys a lead from another aggregator or from a generator and agrees to sell the lead only once and only to one specific insurance provider. This is contributed data, but there is no way to validate it as fact. What sometimes happens is that a bad actor will sell that lead to other insurance providers or hold it for a week or so and then sell it again—a recycled lead. There is no transparency and little accountability. To validate contributed data as factual, you have to establish a “chain of custody” to verify that each lead seller participating in the ping post system is playing by the rules. Then, if there is ever a problem or complaint, you have data to help the lead generator or buyer that is experiencing a problem identify where in the chain the problem occurred and expose the bad actor.

The Most Crucial Area for Improvement: Improving Lead Value

To continually improve relations with their buyers, sellers always seek ways to cultivate greater value in their leads. The simplest solution is for sellers to distribute the highest-intent leads possible and do everything possible to eliminate selling “no intent” leads to their clients. To best accomplish this, sellers must require any upstream publishers and generators to adhere to the simple rules that sellers and buyers have established and have a mechanism in place to verify any contributed data surrounding exchanged leads. If anyone is still following the antiquated practices of a bad actor, it’s going to catch up to them eventually. See also: Developing Programs for Shifting Channels

Bad actors are bad news for the entire ecosystem, leaving a bad taste in the mouths of buyers that can cloud relationships with reputable sellers and result in a deterioration of value for all participants.
By exposing the bad actors, sellers can avoid a race to the bottom, ensure they deliver a great consumer experience and deliver high-intent leads—and the resulting growth opportunities—to buyers. Technology is available to the lead-gen industry today to enable the chain of custody and associated data trail. We encourage everyone to join this insurance industry initiative.
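To make the "chain of custody" idea described above more concrete, here is a minimal sketch of an append-only custody trail, assuming each participant adds a hashed record of what it did with the lead. The field names, IDs and actions are invented for illustration; real implementations would also need signatures, shared infrastructure and agreed rules.

```python
# Minimal sketch of a lead "chain of custody": each seller appends a record
# whose hash covers the previous record's hash, so downstream parties can
# verify that the contributed history has not been quietly rewritten.
import hashlib
import json
import time

def add_custody_record(chain, seller_id, action, lead_id):
    prev_hash = chain[-1]["hash"] if chain else ""
    record = {
        "lead_id": lead_id,
        "seller_id": seller_id,
        "action": action,          # e.g. "generated", "sold_shared", "sold_exclusive"
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return chain + [record]

def verify_chain(chain):
    """Recompute each hash and check the links; an inserted resale or a
    backdated record breaks the chain and exposes the bad actor."""
    prev_hash = ""
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev_hash"] != prev_hash or rec["hash"] != expected:
            return False
        prev_hash = rec["hash"]
    return True

chain = []
chain = add_custody_record(chain, "publisher-123", "generated", "lead-001")
chain = add_custody_record(chain, "aggregator-9", "sold_exclusive", "lead-001")
print(verify_chain(chain))          # True
chain[1]["action"] = "sold_shared"  # a bad actor rewrites history...
print(verify_chain(chain))          # ...and the chain no longer verifies
```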

Jaimie Pickles


Jaimie Pickles is co-founder and CEO at First Interpreter.

He was previously general manager, insurance, at Jornaya, which analyzes consumer leads for insurance and other industries.  Before that, he was president and founder of Canal Partner, a digital advertising technology company, and president of InsWeb, an online insurance marketplace.

Is There Risk in Embracing Insurtech?

How can the 2016 results at AIG be explained? Could the recent emphasis on technology, among other things, have caused a loss of focus?

As insurers rush headlong into the digital scramble, they should keep in mind the proverbial iceberg. Not all the risks involved are strictly tied to the innovation itself. Certain ones are below the water level. Insurers actively participating in the digital revolution have done so in a variety of ways: 1) innovation labs, 2) insurtech accelerators with external partners, 3) investments in insurtech companies, 4) purchases of insurtech companies. These are reasonable approaches for staying current and competitive. However, there are some caveats that should be heeded.

Focus Risk

Insurance is not a simple business. Machines cannot be set to produce thousands of identical items, a sale is not final and competition is never at a low ebb. It is a complex business that relies on actuarial forecasting, capital models, complicated and multi-layered contracts, in many cases, and astute claims handling. Thus, companies must remain focused on the functions and metrics fundamental to the business if they are to achieve good results. Over the years, the insurance industry has adapted to paradigm shifts of all types, for example: 1) automation of internal operations, 2) agent/broker electronic interface, 3) paperless environments, 4) increased transparency with regulators and 5) product development responsive to new risks such as cyber or supply chain disruption. Now, creating new ways to interact with stakeholders digitally and developing products that are fit for purpose in a digital world should be within the capability bounds of these same insurers. The caution is that insurers should not get so focused on their digital initiatives that they lose proper sight of the basics of the business: underwriting, claims, actuarial, finance, customer service. Equally, insurers cannot lose sight of other disruptive forces in the environment, such as climate change, terrorism and cyber threats. See also: Insurtech: Unstoppable Momentum

A piece appearing on AIR Worldwide’s website, written by Bill Churney, asks “Have You Lost Focus On Managing Catastrophe Risk?” He alludes to the fact that catastrophes have been light these past 10 years, which may cause inattention, and that many new insurance staffers were not working when Katrina, Andrew or Hugo hit and thus have no personal experience to tap for handling sizable events. A lack of focus on managing catastrophe risk could be critically detrimental for companies. And although there is nothing concrete to suggest that insurers have lost such focus, the question underscores the possibility of attention deficits. The need for continuous and careful attention to the rudimentary aspects of the business cannot be dismissed, even if they may not seem as exciting or timely as digital inventions. Within memory, there have been companies that allowed themselves to lose necessary focus. Some got so focused on mergers and acquisitions that core functions were not managed properly while the emphasis was on cross sales and economies of scale. Some got so intent on improving customer relations that business imperatives were ignored in favor of appeasing the customer, and some got so diversified that senior management did not have the bandwidth to manage the whole enterprise. How can the 2016 results at AIG be explained?
Could the more recent focus on divestitures, staff changes and cuts, a drive to return dividends to shareholders and the CEO’s reported concentration on technology have caused it to lose its once unparalleled focus on profitable underwriting, rigorous claims handling and product innovation?

Investment Risk

With investments pouring into insurtech, the question arises: What is left for anything else? Despite fintech investments starting to slow, KPMG reports, "There was a dramatic increase in interest in insurtech in Q3’16, and the trend is expected to continue. The U.S. was the top country in Q3’16 with 10 insurtech deals, valued at $104.7 million in total.” These numbers do not capture the many millions of dollars that insurers are investing in insurtech activities internally, of course. As mentioned above, they are spending money to create dedicated innovation labs and accelerator programs and to launch other types of speculative insurtech projects. Many older projects have become operational, including new business unit or company startups, the introduction of bots on company websites, telematics in vehicles, digitized claims handling...and the list goes on. How does an insurer know when an investment in insurtech is enough or too much, thereby crowding out other necessary investments required by functions such as underwriting, claims or actuarial? The caution is not about doing an ROI (return on investment) analysis for a specific project. It is about doing an ROI analysis for the portfolio of projects that are vying for funding vis-a-vis the need to keep the company solvent while maintaining progress with the digital strategy. The larger the insurer, the more used it is to managing multiple priorities and projects. For mid-size to small insurers, this skill may be less developed, and they may face even greater risk of getting out of balance.

Growth Risk

Insurance is one of the few industries for which growth can be just as risky as no growth. Industry pundits have long claimed that new business performs about three points worse than policies already on the books. The difference between a company at a combined ratio of 99 compared with 102 can be quite significant. The causes for this phenomenon have to do with such factors as: 1) the potential for adverse selection, 2) the reasons customers choose to change carriers and 3) the costs associated with putting new business on the books. These are not the only ones. It is harder for actuaries to predict the loss patterns for groups of customers for whom there is no history in the company’s database. See also: Infrastructure: Risks and Opportunities

If the reason for investing in insurtech is to increase new business written, insurers should be cautious about how much and what kind of new business they will write because of their insurtech enhancements. To the extent that insurtech enables insurers to hold on to existing business, the outcome is less risky. For example, it remains to be seen whether drivers who want to buy insurance by the mile are a better or worse risk pool than other drivers, or whether those involved in the sharing economy, such as renting rooms in their homes, are more or less prone to loss than homeowners who do not rent rooms. Are small businesses that are willing to buy their coverage online likely to file a higher number of claims or a lower number compared with small businesses that use an agent?
Do insurance buyers who are attracted to peer-to-peer providers have loss experiences at a different rate than those who are not attracted to such a model?

Conclusion

The march toward more digitization in the insurance industry will and must go forward. At the same time, insurers should be wise enough to realize and address underlying risks inherent in this type of aggressive campaign to modernize.
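A brief numerical footnote to the growth-risk point above: a combined ratio of 99 versus 102 is the difference between an underwriting profit and an underwriting loss. A minimal sketch of the simplified calculation, with invented figures:

```python
# Combined ratio (simplified) = (incurred losses + expenses) / earned premium.
# Below 100% the book makes an underwriting profit; above 100% it loses money
# before investment income. All figures below are invented for illustration.
def combined_ratio(incurred_losses: float, expenses: float, earned_premium: float) -> float:
    return 100 * (incurred_losses + expenses) / earned_premium

renewals = combined_ratio(incurred_losses=69_000_000, expenses=30_000_000,
                          earned_premium=100_000_000)     # 99.0 -> $1M underwriting profit
new_business = combined_ratio(incurred_losses=72_000_000, expenses=30_000_000,
                              earned_premium=100_000_000)  # 102.0 -> $2M underwriting loss
print(renewals, new_business)
```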

Donna Galer


Donna Galer is a consultant, author and lecturer. 

She has written three books on ERM: Enterprise Risk Management – Straight To The Point, Enterprise Risk Management – Straight To The Value and Enterprise Risk Management – Straight Talk For Nonprofits, with co-author Al Decker. She is an active contributor to the Insurance Thought Leadership website and other industry publications. In addition, she has given presentations at RIMS, CPCU, PCI (now APCIA) and university events.

Currently, she is an independent consultant on ERM, ESG and strategic planning. She was recently a senior adviser at Hanover Stone Solutions. She served as the chairwoman of the Spencer Educational Foundation from 2006-2010. From 1989 to 2006, she was with Zurich Insurance Group, where she held many positions both in the U.S. and in Switzerland, including: EVP corporate development, global head of investor relations, EVP compliance and governance and regional manager for North America. Her last position at Zurich was executive vice president and chief administrative officer for Zurich’s world-wide general insurance business ($36 Billion GWP), with responsibility for strategic planning and other areas. She began her insurance career at Crum & Forster Insurance.  

She has served on numerous industry and academic boards. Among these are: NC State’s Poole School of Business’ Enterprise Risk Management’s Advisory Board, Illinois State University’s Katie School of Insurance, Spencer Educational Foundation. She won “The Editor’s Choice Award” from the Society of Financial Examiners in 2017 for her co-written articles on KRIs/KPIs and related subjects. She was named among the “Top 100 Insurance Women” by Business Insurance in 2000.

7 Steps for Inventing the Future

If we can escape the focus on incremental innovation, computing pioneer Alan Kay says, we can gain “unbelievable leverage on the universe.”

Alan Kay is widely known for the credo, “The best way to predict the future is to invent it.” For him, the phrase is not just a witty quip; it is a guiding principle that has yielded a long list of accomplishments and continues to shape his work. Kay was a ringleader of the exceptional group of ARPA-inspired scientists and engineers that created an entire genre of personal computing and pervasive worldwide networking. Four decades later, most of the information-technology industry and much of global commerce depend on this community’s inventions. Technology companies and many others in downstream industries have collectively realized trillions of dollars in revenues and tens of trillions in market value because of them. Alan Kay made several fundamental contributions, including personal computers, object-oriented programming and graphical user interfaces. He was also a leading member of the Xerox PARC community that actualized those concepts and integrated them with other seminal developments, including the Ethernet, laser printing, modern word processing, client-servers and peer-to-peer networking. For these contributions, both the National Academy of Engineering and the Association for Computing Machinery have awarded him their highest honors. I’ve worked with Alan to help bring his insights into the business realm for more than three decades. I also serve on the board of Viewpoints Research Institute, the nonprofit research organization that he founded and directs. Drawing on these vantage points and numerous conversations, I’ll try to capture his approach to invention. He calls it a method for “escaping the present to invent the future,” and describes it in seven steps:
  1. Smell out a need
  2. Apply favorable exponentials
  3. Project the need 30 years out, imagining what might be possible in the context of the exponential curves
  4. Create a 30-year vision
  5. Pull the 30-year vision back into a more concrete 10- to 15-year vision
  6. Compute in the future
  7. Crawl your way there
Here's a summary of each step:

1. Smell out a need

“Everybody loves change, except for the change part,” Kay observes. Because the present is so vivid and people have heavy incentives to optimize it, we tend to fixate on future scenarios that deliver incremental solutions to existing problems. To reach beyond the incremental, the first step to inventing the future is deep “problem finding,” rather than short-term problem solving. Smell out a need that is trapped by incremental thinking. In Alan’s case, the need that he sensed in the late '60s was the potential for computers to redefine the context of how children learn. Prompted by conversations with Seymour Papert at MIT and inspired by the work of Ivan Sutherland, J.C.R. Licklider, Doug Engelbart and others in the early ARPA community, Kay realized that every child should have a computer that helps him or her learn. Here's how he described the insight:
It was like a magnet on the horizon. I had a lot of ideas but no really cosmic ones until that point.
This led Kay to wonder how computers could form a new kind of reading and writing medium that enabled important and powerful ideas to be discussed, played with and learned. But the hottest computers at the time were IBM 360 mainframes costing millions. The use of computers in educating children was almost nonexistent. And there were no such things as personal computers.

2. Apply favorable exponentials

To break the tyranny of current assumptions, identify exponential improvements in technological capabilities that could radically alter the range of possible approaches. In 1965, Gordon Moore made his observation that computing would dramatically increase in power, and decrease in relative cost, at an exponential pace. Moore’s prediction, which would become known as Moore’s Law, was the “favorable exponential” that Kay applied. Today, the fruits of Moore’s Law such as mobile devices, social media, cloud computing, big data, artificial intelligence and the Internet of Things continue to offer exponential advances favorable for invention. As I’ve previously written, these are make-or-break technologies for all information-intensive companies. But don’t limit yourself to those. Kay is especially optimistic about the favorable exponential at the intersection of computer-facilitated design, simulation and fabrication. This is the process of developing concepts and ideas using computer design tools and then testing and evolving them using computer-based simulation tools. Only after extensive testing and validation are physical components ever built, and, when they are, it can be done through computer-mediated fabrication, including 3D printing. This approach applies to a wide range of domains, including mechanical, electrical and biological systems. It is becoming the standard method for developing everything, including car parts and whole cars, computer algorithms and chips, and even beating nature at its own game. Scientists and engineers realize tremendous benefits in terms of the number of designs that can be considered and the speed and rigor with which they can do so. These allow, Kay told me, “unbelievable leverage on the universe.” See also: To Shape the Future, Write Its History

3. Project the need 30 years out and imagine what might be possible in the context of the exponential curves

30 years is so far in the future that you don’t have to worry about how to get out there. Focus instead on what is important to have. There’s no possibility of being forced to demonstrate or prove how to get there incrementally. Asking “How is this incremental to the present?” is the “biggest idea killer of all time,” Kay says. The answer to the “incremental” question, he says, is “Forget it. The present is the least interesting time to live in.” Instead, by projecting 30 years into the future, the question becomes, “Wouldn’t it be ridiculous if we didn’t have this?” Projecting out what would be “ridiculous not to have” in 30 years led to many visionary concepts that earned Kay wide recognition as “the father of the personal computer.” He was sure, for example, that children would have ready access to laptops and tablets by the late 1990s — even though personal computers did not yet exist. As he saw it, there was a technological reason for it, there were user reasons for it and there were educational reasons for it. All those factors contributed to his misty vision, and he didn’t have to prove it because 30 years was so far in the future.
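The “million times greater computing power” mentioned just below is simply a favorable exponential compounded over the 30-year horizon. A quick back-of-the-envelope check, where the doubling period is the only assumption:

```python
# Capability multiplier after `years` if capability doubles every `doubling_years`.
# With ~2-year doublings, 30 years gives roughly 32,000x; with ~18-month
# doublings it exceeds 1,000,000x -- the scale these projections rely on.
def improvement(years: float, doubling_years: float) -> float:
    return 2 ** (years / doubling_years)

for d in (2.0, 1.5):
    print(f"doubling every {d} years -> {improvement(30, d):,.0f}x over 30 years")
```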
How might the world look relative to the needs that you smell out? What will you have ready access to in a world with a million times greater computing power, cheap 3D fabrication, boundless energy and so on? Remember, projecting to 2050 is intended as a mind-stretching exercise, not a precise forecasting one. This is where romance lives, albeit romance underpinned by deep science rather than pure fantasy.

4. Create a 30-year vision

A vision is different from a mission or a goal. If the previous step was about romance, a 30-year vision is more like a dream. It is a vague picture of a desirable future state of affairs in that 30-year future. This is the step where Kay’s recognition that computers would be widely available by the late 1990s turned into a vision of what form those computers might take. That vision included the Dynabook, a powerful and portable electronic device the size of a three-ring notebook with a touch-sensitive liquid crystal screen and a keyboard for entering information. Here’s one of Kay’s early sketches of the Dynabook from that time.

[Image: Dynabook concept drawing]

The next illustration is Kay’s sketch of the Dynabook in use. He describes the scenario as two 12-year-olds learning about orbital dynamics from a version of “Space Wars” that they wrote themselves. They are using two personal Dynabooks connected over a wireless network.

[Image: Children using Dynabooks]

Kay’s peers in the ARPA community had already envisioned some of the key building blocks for the Dynabook, such as LCD panels and an Internet-like, worldwide, self-healing network. (For a fascinating history of the early ARPA community, see Mitchell Waldrop’s brilliant book, "The Dream Machine.") For Kay, these earlier works crystallized into the Dynabook once he thought about them in the context of children’s education. As he described it,
The Dynabook was born when it had that cosmic purpose.
Laptops, notebook computers and tablets have roots in the early concepts of the Dynabook.

5. Pull the 30-year vision back into a 10- to 15-year lesser vision

Kay points out that one of the powerful aspects of computing is that, if you want to live 10 to 15 years in the future, you can do it. You just have to pay 10 to 20 times as much. That’s because tomorrow’s everyday computers can be simulated using today’s supercomputers. Instead of suffering the limitations of today’s commodity computers (which will be long obsolete before you get to the future you are inventing), inventors should use customized supercomputers to prototype, test and evolve aspects of their 30-year vision. Pulling back into the 10- to 15-year window brings inventors back from the “pie in the sky” to something more concrete. Jumping into that “more concrete” future is exactly what Alan Kay did in 1971 when he joined the Xerox Palo Alto Research Center (PARC) effort to build “the office of the future.” It started with Butler Lampson and Chuck Thacker, two of PARC’s leading engineers, asking Kay, “How would you like us to build your little machine?” The resulting computer was an “interim Dynabook,” as Kay thought of it, but better known as the Xerox Alto.

[Image: Xerox Alto]

The Alto was the hardware equivalent of the Apple Macintosh of 1988, but running in 1973. Instead of costing a couple of thousand dollars each, the Alto cost about $70,000 (in today’s dollars). PARC built 2,000 of them — thereby providing Kay and his team with the environment to develop the software for a 15-year, lesser-but-running version of his 30-year vision.

6. Compute in the future

Now, having created the computing environment of the future, you can invent the software. This approach is critical because the hardest thing about software is getting from requirements and specification to properly running code. Much of the time spent in developing software is spent optimizing code for the limitations of the hardware environment—i.e., making it run fast enough and robust enough. Providing a more powerful, unconstrained futuristic computing environment frees developers to focus on invention rather than optimization. (This was the impetus for another Kay principle, popularized by Steve Jobs, that “People who are really serious about software should make their own hardware.”) The Alto essentially allowed PARC researchers to simulate the laptop of the future. Armed with it, Kay was a visionary force at PARC. Kay led the Learning Research Group at PARC, and, though PARC’s mission was focused on the office environment, Kay rightly decided that the best path toward that mission was to focus on children in educational settings. He and his team studied how children could use personal computers in different subject areas. They studied how to help children learn to use computers and how children could use computers to learn. And, they studied how the computers needed to be redesigned to facilitate such learning.

[Image: Children with a Xerox Alto]

The power of the Alto gave Kay and his team, which included Adele Goldberg, Dan Ingalls, Ted Kaehler and Larry Tesler, the ability to do thousands of experiments with children in the process of understanding these questions and working toward better software to address them.
We could have a couple of pitchers of beer at lunch, come back, and play all afternoon trying out different user interface ideas. Often, we didn’t even save the code.
For another example of the “compute in the future” approach, take Google’s driverless car. Rather than using off-the-shelf or incrementally better car components, Google researchers used state-of-the-art LIDAR, cameras, sensors and processors in their experimental vehicles. Google also built prototype vehicles from scratch, in addition to retrofitting current car models. The research vehicles and test environments cost many times as much as standard production cars and facilities. But they were not meant for production. Google’s researchers know that Moore’s Law and other favorable exponentials will soon make their research platforms practical. These “computing in the future” platforms allow Google to invent and test driving algorithms on car platforms of the future today. Google greatly accelerated the state of the art of driverless cars and ignited a global race to perfect the technology. Google recently spun off a separate company, Waymo, to commercialize the fruits of this research. Waymo’s scientists and engineers are learning from a fleet of test vehicles driving 10,000 to 15,000 miles a week on public roads and interacting with real infrastructure, weather and traffic (including other drivers). The developers are also taking advantage of Google’s powerful cloud-based data and computing environment to do extensive simulation-based testing. Waymo reports that it is running its driving algorithms through more than three million miles of simulated driving each day (using data collected by its experimental fleet). See also: How to Master the ABCs of Innovation

7. Crawl your way there

Invention requires both inspiration and perspiration. Inspired by this alternative perspective of thinking about their work, researchers can much more effectively channel their perspiration. As Kay is known for saying, “Point of view is worth 80 IQ points.” PARC’s success demonstrates that even if one pursues a 15-year vision — or, more accurately, because one pursues such a long-term vision — many interim benefits might well come of the effort. And, while the idea of giving researchers 2,000 supercomputers and building custom software environments might seem extravagant and expensive, it is actually quite cheap when you consider how much you can learn and invent. Over five glorious years in the early 1970s, the work at PARC drove the evolution of much of future computing. The software environment advanced to become more user-friendly and supportive of communications and different kinds of media. This led to many capabilities that are de rigueur today, including graphical interfaces, high-quality bit-mapped displays, what-you-see-is-what-you-get (WYSIWYG) word processing and page layout applications. The hardware system builders learned more about what it would take to support future applications and also evolved accordingly. This led to hardware designs that better supported the display of information, network communications and connecting to peripherals, rather than being optimized for number crunching. Major advancements included Ethernet, laser printing, peer-to-peer and client-server computing and internetworking. Kay estimates that the total budget for the parts of Xerox PARC that contributed to these inventions was about $50 million in today’s dollars. Compare that number to the hundreds of billions of dollars that Xerox directly earned from the laser printer.
[Image: Xerox 9700 printers]

Although the exact number is hard to calculate, the work at PARC also unlocked trillions reaped by other technology-related businesses. One of the most vivid illustrations of the central role that Xerox played was a years-later interchange between Steve Jobs and Bill Gates. In response to Jobs’ accusation that Microsoft was stealing ideas from the Mac, Gates tells him:
Well, Steve, I think there’s more than one way of looking at it. I think it’s more like we both had this rich neighbor named Xerox, and I broke into his house to steal the TV set and found out that you had already stolen it.
Kay cautions that his method is not a cookbook for invention. It is more like a power tool that needs to be wielded by skilled hands. It is also a method that has been greatly enabled by Kay and his colleagues’ inventions. Beyond the technology industry that they helped spawn, their inventions also underpin discovery and innovation in every field of science and technology, including chemistry, biology, engineering, health and agriculture. Information technology is not only a great invention; it has reinvented invention. It powers the favorable exponential curves upon which other inventors can escape the present and invent the future. See also: How We’re Wired to Make Bad Decisions

For his part, Kay continues to lead research at the frontiers of computing, with a continued emphasis on human advancement. In addition to his Viewpoints Research Institute, he recently helped to formulate the Human Advancement Research Community (HARC) at YC Research, the nonprofit research arm of Y Combinator. HARC’s mission is “to ensure human wisdom exceeds human power, by inventing technology that allows all humans to see further and understand more deeply.” That is a future worth inventing.

The Brewing Crisis Over Jobs

This should be the most innovative decade in history, and must be if we’re going to avoid a Mad Max dystopia in favor of a Star Trek future.


Everyone has heard the old anecdote about the frog in a pot of water. If the temperature is raised slowly, the frog won’t react, eventually allowing itself to get boiled. That’s where we’re heading as a country when it comes to technological advances and the threat they pose to millions of jobs. Seemingly every day, there are new stories in the media about artificial intelligence, data and robotics — and the jobs they threaten in retail, transportation, carrier transport and even the legal profession. Yet no one is jumping out of the pot. Let’s be clear: This is not science fiction. In just recent days, there have been articles on Amazon’s automation ambitions, described by the New York Times as “putting traditional retail jobs in jeopardy,” and on the legal profession bracing for technology taking over some tasks once handled by lawyers. As reported in Recode, a new study by the research firm PwC found that nearly four out of 10 jobs in the U.S. could be “vulnerable to replacement by robots in the next 15 years.” Many of those will be truckers, among the most common jobs in states across the country. See also: Why Trump’s Travel Ban Hurts Innovation   Yet when President Trump hosted truck drivers at the White House recently, he dedicated his remarks to the threat of healthcare without uttering a word about the advanced driverless semi fleets that will soon replace them. His Treasury Secretary Steven Mnuchin shockingly said in an interview last week that we’re “50 to 100 years” away from artificial intelligence threatening jobs. It’s easy for sensationalist headlines about AI to dominate, like those about Elon Musk’s warning that it poses an existential threat. Yet the attention of people such as Musk, Bill Gates and Stephen Hawking should be a signal to Trump and Mnuchin that AI and related robotics and automation are moving at a far faster clip than they are acknowledging. It should be on the administration’s radar screen, and officials should be jumping out of the boiling water. Solutions won’t come easy. Already some experts suggest a universal basic income will be necessary to offset the job losses. We also have to help our workforce make the transition. Educational institutions such as Miami-Dade College and Harvard University have introduced advanced programming courses that take students from zero to six programming languages on a fast track. More needs to be done. This should be the most innovative decade in human history, and it has to be if we’re going to avoid a Mad Max dystopia in favor of a Star Trek future. Of course, there are those who say similar warnings were raised as technology revolutionized agriculture and other industries along the way. They might argue that then, as now, those advances led to more jobs. We would all welcome that and the potential these changes will represent for improving lives. See also: Can Trump Make ‘the Cyber’ Secure?   Technological advances could greatly reduce the cost of living, make housing more affordable and solve some of the biggest challenges whether in energy or long-term care, an issue painfully familiar to so many families. It may also help improve quality of life in the long term, as men and women gain greater flexibility to spend time with loved ones rather than dedicating 40 or more hours a week to working and so many others commuting. In the near term, however, the job losses that are possible could inflict tremendous economic pain. We are far from where we need to be. 
That will continue to be the case until policymakers, educators and innovators come together to address the reality before us. We won’t solve this overnight, but we can’t afford to wait until it’s too late. This was written by Vivek Wadhwa and Jeff Greene.


Vivek Wadhwa


Vivek Wadhwa is a fellow at the Arthur and Toni Rembe Rock Center for Corporate Governance at Stanford University; director of research at the Center for Entrepreneurship and Research Commercialization at Duke University's Pratt School of Engineering; and a distinguished fellow at Singularity University.

What Your Broker Should Be Asking You

There are two main things you can do to ensure you get the proper business insurance coverage at the best rates for your company.

When you’re running a business, insurance is one of those things that you know you have to have, but you don’t want to think about too much. There are so many different types of policies, state regulations and industry requirements that trying to determine what you need can be overwhelming, and that doesn’t even include trying to get the best rates once you settle on a policy. Fortunately, insurance brokers will manage all of that for you. But to do so, they need to know a lot about your business. While you may not want to know all the details and intricacies of securing the various business insurance policies that you need, you want to be sure the insurance broker handling things for you can get you comprehensive coverage at the best possible rates.

What You Want to Hear

The primary thing you want to hear from your broker is a lot of questions. The more details the broker has about the way your business is structured, the sorts of policies and procedures you have in place and the types of risks and exposures your workers encounter, the better able the broker is to find you the best policy matches. See also: No, Brokers Are Not Going Away

Of course, the kinds of questions the broker asks matter, too. Some of the basics may seem obvious, like how many workers’ compensation claims you’ve had in the past and what types of safety programs you have in place. Some other important questions might not seem relevant right away, however. Some examples of these include:
  • What are your hiring and HR practices?
  • What types of clients do you typically serve?
  • Exactly what kinds of lifting activities do your employees perform and how frequently?
  • Do you have a program in place to aid injured employees in returning to work?
  • How long have you been in business?
  • Do your employees ever have to work at great height or underground, and for how long?
  • What type of experience does your management team have?
  • Is your work seasonal?
  • What is your current loss control policy?
Not all of these questions may be necessary depending on the type of coverage you’re seeking, but in general, the more your broker asks about your business, the better.

Why It Matters

While it may be tedious at times, it’s important to provide your broker with as many details as possible in response to any questions asked. That’s because the methods for calculating coverages and premiums take into account a wide range of data that insurance companies use to determine your risk for various types of claims. Your broker will use the details about your company to develop a profile of your business and then use that profile when comparing rates and options available through various insurers. Finding the best fit for you, and generally the lowest rates, is your broker’s goal, and to accomplish it, the broker needs all the information that could potentially affect those rates, both positively and negatively. To ultimately determine your premiums, this information will be combined with data on accident frequency in your industry, state regulations and guidelines, and experience modifiers; the more you can give, the better the chance that your rate will be fair and appropriate for your risk.

What You Can Do

There are two main things you can do to ensure you get the proper business insurance coverage at the best rates for your company. The first is to choose a knowledgeable and experienced broker whom you have confidence in. You may want to speak to a few before settling on one, and a lot of what you should evaluate them on is the type of questions they ask about your business. See also: Will You Be the Broker of the Future?

You should also take the time to gather as much detailed information as possible in answer to those questions. The broker's job is to represent you to the insurance companies in the best possible light, and only you can provide the tools needed to do that.
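To illustrate why those details and experience modifiers matter, here is a simplified sketch of how a workers’ compensation premium is commonly built up from payroll, a class rate and an experience modifier. The payroll figure, class rate and e-mod values are invented for illustration; real rating involves many more factors and varies by state and insurer.

```python
# Simplified workers' compensation premium sketch: manual premium is payroll
# (per $100) times the class rate, then adjusted by the experience modifier.
# All numbers below are invented for illustration only.
def manual_premium(annual_payroll: float, class_rate_per_100: float) -> float:
    return (annual_payroll / 100) * class_rate_per_100

def modified_premium(annual_payroll: float, class_rate_per_100: float,
                     experience_mod: float) -> float:
    return manual_premium(annual_payroll, class_rate_per_100) * experience_mod

base = modified_premium(2_000_000, 4.50, 1.00)  # $90,000 at a neutral e-mod
good = modified_premium(2_000_000, 4.50, 0.85)  # $76,500 with a favorable claims history
poor = modified_premium(2_000_000, 4.50, 1.20)  # $108,000 with a poor claims history
print(base, good, poor)
```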

Nick Roumi


Nick Roumi is founder and CEO of Radius Insurance. He was twice named to the prestigious Inc. 500 list of the fastest-growing businesses in America and was nominated for Entrepreneur of the Year by Ernst & Young. Roumi loves solving problems and enjoys helping fellow business owners tackle their risk and operational challenges.

The Need for Agile, Collaborative Leaders

Change has arrived in the insurance industry, and it has decided to stay a while and get comfortable. Leaders must adapt--constantly.

Change has arrived in the insurance industry—and it has decided to stay a while and get comfortable. In this recap of a general session from The Institutes CPCU Society 2016 Annual Meeting, industry executives spoke about the ramifications of today’s landscape of change and the need for insurance and risk management professionals to stay on top of the latest technologies and technological issues, such as cyber risk, and to embrace ACE: agility, collaboration and education. The insurance and risk management industry is constantly evolving and, like many of the industries it insures, is currently at an introspective point, as change is all around. Much of this change is driven by the increasing use of technology, which is having a profound effect on businesses, individuals and society as a whole. Cyber risk, for example, is a major concern for insurers. With nearly half of insurance professionals planning to retire in the next 10 years, developing a new generation of leadership is essential. To be an insurance and risk management leader in today’s environment requires continuing education. That was the consensus of a panel of industry chief executive officers who discussed at The Institutes CPCU Society 2016 Annual Meeting in Hawaii emerging trends in insurance and risk management and how to be a leader in the modern work environment. I had the pleasure of moderating the panel, “CEO Conversations, Becoming a Leader,” which included Jeffrey Bowman, FCCA, senior adviser in Deloitte Insurance Consulting Practice and chairman of The Institutes’ Board of Trustees; Albert “Skip” Counselman, CPCU, chairman and chief executive officer of Riggs, Counselman, Michaels & Downes; Alan Krapf, CPCU, president of the Property and Casualty Insurance Group at USAA; and Christine Sears, CPA, CPCU, president and chief executive officer of Penn National Insurance. See also: Best Insurance? A Leadership Pipeline   All of the panelists have experience managing change in the industry and implementing new technologies, regulations and working practices. As great leaders themselves, they have helped others grow into leadership roles within their own organizations. They also serve as board members of The Institutes, which has given me the pleasure of knowing them for many years. The panelists agreed that for the industry and its professionals, honing critical thinking skills and maintaining knowledge of emerging issues—such as growing technology and data analytics—and then being able to use and apply that knowledge are critical to future success. Regardless of professionals’ comfort level with technology, lifelong learning about it, as well as about economics, societal changes and other new developments, is vital to the advancement of both their careers and the industry. Understanding New Technology In regard to new technology, the panelists noted that, though it can help facilitate communications, analysis and efficiency, it also poses a large risk. For example, Bowman said that understanding, preventing and insuring cyber risk is a major concern that professionals are still trying to determine how to insure. 
Because it is evolving quickly, is very complicated and has many elements, “nobody really has this right at the moment,” he said, adding that companies also have to be aware of third-party risks: “It brings in a whole realm of issues around compliance, regulation and governance that everybody has to be aware of.” Counselman noted that cyber crime does not discriminate, but affects everyone: individuals, large businesses and small businesses. “You can buy insurance, you can transfer the risk, but transferring the risk isn't the entire answer,” he said. “What's really the answer is being vigilant and educated, learning and trying to stay one step ahead. And that's the message we have to get across, because just as we thought about fire insurance and general liability insurance for years and years as being the mainstay of what we were doing and telling our clients about, this cyber risk can shut down a client and put a client out of business very quickly if the appropriate safeguards aren't enforced.” Chief among corporate cyber risks is reputational risk. Krapf said: “It's not just about protecting the data and the financials. It’s also about the brand. How do I protect the reputation of my company, too?” Sears added that reputational damage from cyber crimes can cause billions of dollars in damage. “What is really key is that you have a plan in place for when that happens,” she said. “And so, all of us should have a crisis management plan in place so we know that when it happens—because it really is more a matter of when—we know exactly what the processes are that we're going to follow.” Accordingly, she said, companies should have a plan in place to quickly handle a public relations crisis. ACE in the Hole: Remaining Agile, Collaborative and Educated The panelists all agreed that, to address the rapid changes in technology and other spheres, continuing education and agility are essential. “Really, what is happening today is a fourth industrial revolution: technology in the insurance industry,” Bowman said. “To deal with the changes that are coming in and the changes that have to happen within organizations, you have to have qualified staff.” Panelists also discussed how collaboration across departments is key to dealing with the fast pace of technological change. “To be successful in observing and understanding change, deciding what to do about change and implementing change, you need to collaborate today,” Counselman said. “You can’t just make your own plans within your one division or within your one department. You need to collaborate. You need to have input from people who might be involved on a daily basis in property-casualty coverages and risk management advice, in IT advice and financial planning. You need all of that, and you need to be effective at giving everyone the opportunity to understand the issue that you’re trying to approach and determine your strategy—and you need that input across divisions.” Diversity can enhance collaboration, the panelists asserted. With a diverse workforce comes diverse perspectives, which can aid in everything from product development, customer relationships and risk management. “Diversity lets you come up with richer and better decisions and allows you to come up with an answer that’s not just the answer that’s always been out there,” Krapf said. Allowing Professionals to Shine Part of facilitating collaboration across departments is the move to more decentralized organizations. 
Decentralized organizations are often flatter and less bureaucratic, thereby helping empower employees to be more involved in decision-making processes. Krapf added that institutional success further depends on a clear explanation of the mission. “You have to make sure you’re clear with all of your employees about what you are trying to accomplish and then let them make decisions.” To be well-equipped to make proper decisions in today’s rapidly changing landscape, insurance professionals must continue to learn. Gaining information and ensuring a solid understanding of that information are competitive advantages in the workplace. This idea was reinforced by Sears, who said, “Lifelong learning is absolutely what got me to the position that I’m in today.” With nearly half of insurance professionals expected to retire from the industry in the next decade, the industry needs insightful and capable new professionals. The good news for the industry, and specifically CPCUs, is that they have proven their commitment to lifelong learning and staying on top of industry issues. Changes in insurance, business and society present both opportunities and challenges for ensuring professional growth and leadership development and for grooming a generation of professionals with different working styles. From the panel’s perspective, insurance professionals are clearly going to have to work harder than ever to keep up with new developments and best practices and to develop creative solutions. This will enable them to thrive within the industry’s dynamic work environment and help the industry evolve. See also: Better Way to Think About Leadership   Looking out from my moderator’s chair at the hundreds of new and veteran CPCUs in the audience, meeting with many more at the CPCU Society Annual Meeting and interacting daily with members of the industry, I am optimistic about the future and excited about the opportunities in front of all of us. The insurance industry plays a vital role in making people’s lives easier. Insurance offers the promise that, if you pay your premiums, you will be protected from certain forms of catastrophic risk, thereby allowing you to engage in risk management. Through mutual trust, insurance also provides the peace of mind needed for families to buy a house or car, entrepreneurs to start a business and large companies to expand overseas. In this way, insurance helps oil the wheels of the economy. As holders of the industry’s premier designation, CPCUs are the insurance industry’s natural leaders and role models for continuing education. To this point, Counselman told attendees, “The most important thing you can do is commit yourself to lifelong learning. Getting your CPCU designation is only the beginning.” CPCUs should take great pride in their industry, hard work and accomplishments to date. There will be many opportunities ahead. I encourage CPCUs to raise their hands and seek these out. Find a mentor. And always keep learning.

IoT's Implications for Insurance Carriers

Insurance thrives on data -- and the Internet of Things offers vast volumes of brand-new data to work with. The combination could be powerful.

sixthings
The Internet of Things can assist the insurance industry significantly because this industry thrives on data, and the IoT offers vast volumes of brand-new data to work with. For example, Liberty Mutual is now offering connected smoke alarms to home insurance policyholders. Because these devices are connected to the internet, they can send alerts to policyholders' cell phones if carbon monoxide or smoke is detected. These smoke alarms cost $99, but Liberty Mutual gives them to home insurance customers for free. If customers install them, they can save as much as 5% on their insurance premiums. This is because early notice about fires or carbon monoxide can significantly reduce the risk of losses. Connected smoke alarms are just one example of how home insurance is likely to change due to the IoT. In the future, more IoT devices in the house could lead to increased safety in the home and further reductions in insurance premiums.
Other Uses for the IoT
Connected IoT devices could also heavily affect auto insurance. In fact, they have already begun to do so. For example, Progressive Insurance has already started using something called “Progressive Snapshot.” Progressive Snapshot is a connected device that plugs into the OBD II port in your car. Once it is installed, it can monitor and record information about your driving habits and send that information to Progressive wirelessly. If the Snapshot data shows that you are a good, safe driver, then you can be rewarded with a lower premium. However, if the data reveals that you are a risky and irresponsible driver, then your premium could go up. Progressive has already made more than 1.7 trillion driver observations with Snapshot. These observations are helping the insurer find better and more customized insurance rates. Many auto insurers could soon follow Progressive’s lead. See also: 5 Predictions for the IoT in 2017   It is also possible that many more people could become safe drivers if they know that their driving behavior is being monitored by an IoT device. So, this could reduce accidents and traffic violations as well.
IoT Outlook
In addition to home and auto insurance, many other insurance niches will likely take advantage of the IoT. This is because the more accurate an insurance company’s data is, the better it can price its products to both attract more customers and reduce its risks. Health insurance, for example, could benefit strongly from the IoT because wearable devices can help to monitor a person’s health. In fact, devices such as Fitbits and smart watches have already begun providing such data. See also: How to Reimagine Insurance With IoT   One source even claims that IoT devices could one day predict heart attacks or problems related to substance abuse. IoT devices could be used to send alerts in such circumstances. This could help to both save lives and reduce health insurance premiums. Essentially, the IoT is likely to become a huge asset to the insurance industry. It has the potential to make insurance better for both insurers and customers, and it may also help to reduce injuries, property damage and other types of loss.
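To make the telematics idea concrete, here is a minimal sketch of how a usage-based program might turn device readings into a premium adjustment. It is illustrative only -- the fields, thresholds and discount sizes are assumptions, not Progressive's actual Snapshot model.

```python
# Illustrative only: a toy usage-based pricing adjustment.
# Field names, thresholds and factors are hypothetical assumptions.
from dataclasses import dataclass


@dataclass
class DrivingSummary:
    miles: float        # miles driven in the observation period
    hard_brakes: int    # sudden-braking events reported by the device
    night_miles: float  # miles driven late at night


def premium_adjustment(summary: DrivingSummary, base_premium: float) -> float:
    """Return an adjusted premium from telematics data (toy rating logic)."""
    hard_brakes_per_100mi = 100 * summary.hard_brakes / max(summary.miles, 1)
    night_share = summary.night_miles / max(summary.miles, 1)

    factor = 1.0
    if hard_brakes_per_100mi < 1 and night_share < 0.05:
        factor -= 0.10   # reward smooth, daytime driving with a discount
    elif hard_brakes_per_100mi > 5 or night_share > 0.25:
        factor += 0.10   # surcharge risky patterns

    return round(base_premium * factor, 2)


if __name__ == "__main__":
    safe = DrivingSummary(miles=1200, hard_brakes=4, night_miles=20)
    print(premium_adjustment(safe, base_premium=800.0))  # prints 720.0
```

In a real program, of course, the adjustment would come from a rating model fitted to loss experience rather than hand-picked thresholds; the sketch only shows the shape of the data flow from device readings to price.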

Robin Roberson

Robin Roberson is the managing director of North America for Claim Central, a pioneer in claims fulfillment technology with an open two-sided ecosystem. As the former CEO and co-founder of WeGoLook, she grew the business to more than 45,000 global independent contractors.

Intelligent WC Medical Management

New ways to infuse technology and predictive analytics into the claims and medical management processes offer significant gains.

sixthings
Technology in workers’ comp is hardly new, but new ways to infuse technology and predictive analytics into the claims and medical management processes can significantly improve accuracy, efficiency, outcomes and, importantly, profitability. Well-designed technology that streamlines operational flow, provides key knowledge to the right stakeholders at the right time, promotes efficiency and generates measurable savings is formidable. The system is intelligent and includes these key components:
  1. Predictive analytics
  2. Data monitoring
  3. Knowledge for decision support
Predictive analytics
Predictive analytics is the foundation for creating an intelligent medical management process: analysis of historical data to understand the risks and cost drivers is the basis of the system. For the risks identified, the organization sets its standards and priorities for which stakeholders are automatically alerted to those specific conditions in claims as they occur. The stakeholders are usually claims reps and nurse case managers, but others inside or outside the organization can be alerted, such as upper management or clients, depending on the situation and the organization’s goals. Upper management establishes specific action procedures for specified conditions or situations, thereby creating consistent procedures that can be measured against outcomes. See also: 25 Axioms Of Medical Care In The Workers Compensation System
Data monitoring
Incoming data must be updated and monitored continuously. Random or interval monitoring leaves gaps in important claim knowledge that is overlooked until the next monitoring session, by which time the damage may have escalated. With continuous data monitoring, everything is reviewed continually so nothing is missed. When the data in a claim matches the conditions outlined by the predictive modeling, an alert is sent to the stakeholder so action or intervention is initiated. Some say stakeholders will not comply with such a structured program because they resist being directed. To solve that problem, accountability procedures in the form of audit trails in the system act as the overseer. At any point, management can view what alerts have been sent, to whom they were sent, for what claim and for what reason, thereby observing participation and supporting accountability.
Knowledge for decision support
The alerts offer collected knowledge about the claim needing attention so the stakeholder is not forced to search for information before deciding upon an action. The reason the alert was triggered and a detailed claim history, including medical costs paid to date, are displayed for alert recipients. Importantly, the projected costs for claims with similar characteristics are also displayed, making reserving adjustments easy and accurate. The projected ultimate medical cost for the identified claim is presented to the claims rep based on the analytics, thereby providing decision support for adjusting reserves. No data entry into the system is needed, so accuracy and efficiency are optimized. At the same time, a nurse case manager is automatically notified of the situation if indicated by the organization’s rules in the system and is informed with the same claim detail. Now the case manager and claims rep are collaborating to mitigate the costs for this claim. They know the projected ultimate medical cost for the claim and the projected duration of the claim, so they have a common and concrete target to challenge. Moreover, improvements on the projections offer objective and defensible cost savings analysis. See also: Even More Tips For Building A Workers Compensation Medical Provider "A" Team   Predictive analytics combined with properly designed technology to create an intelligent medical management process establishes a distinct advantage. Knowledge made available at the appropriate time for the right people leads to efficiency and accuracy. Early, intelligent intervention drives better results. Stakeholders coordinate efforts to mitigate the claim, working toward a shared goal.
Finally, knowledge provided for decision support positions the organization for measurable, objective, reportable savings at claim closure.
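A rough sketch of the monitoring-and-alert flow described above follows. The rule names, thresholds and claim fields are hypothetical; the point is the shape of the logic: continuous checks against conditions flagged by the predictive model, alerts that carry claim context to the right stakeholder, and an audit trail that supports accountability.

```python
# A minimal sketch of the alert flow described in the article. All rule names,
# thresholds and fields are illustrative assumptions, not a vendor's system.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class Claim:
    claim_id: str
    medical_paid_to_date: float
    days_open: int
    opioid_scripts: int
    projected_ultimate: float   # supplied by the predictive model


@dataclass
class Alert:
    claim_id: str
    rule: str
    recipient: str
    detail: str
    sent_at: datetime = field(default_factory=datetime.utcnow)


# Each rule pairs a condition derived from the predictive model with a stakeholder.
RULES = [
    ("paid exceeds 75% of projected ultimate",
     lambda c: c.medical_paid_to_date > 0.75 * c.projected_ultimate,
     "claims rep"),
    ("multiple opioid prescriptions",
     lambda c: c.opioid_scripts >= 3,
     "nurse case manager"),
]

audit_trail: List[Alert] = []   # reviewable by management for accountability


def monitor(claim: Claim) -> List[Alert]:
    """Check an incoming claim update against every rule and record any alerts."""
    alerts = []
    for rule_name, condition, recipient in RULES:
        if condition(claim):
            detail = (f"paid to date ${claim.medical_paid_to_date:,.0f}, "
                      f"projected ultimate ${claim.projected_ultimate:,.0f}, "
                      f"{claim.days_open} days open")
            alert = Alert(claim.claim_id, rule_name, recipient, detail)
            alerts.append(alert)
            audit_trail.append(alert)   # audit trail supports accountability
    return alerts
```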

Karen Wolfe

Karen Wolfe is founder, president and CEO of MedMetrics. She has been working in software design, development, data management and analysis specifically for the workers' compensation industry for nearly 25 years. Wolfe's background in healthcare, combined with her business and technology acumen, has resulted in unique expertise.

Modernizing Insurance Accounting -- Finally!

Instead of approaching accounting modernization as a compliance exercise, companies must see the broad range of impacts.

sixthings
Modernization of insurance accounting is finally here. The FASB issued its final guidance on enhanced disclosures for short-duration contracts in May 2015 and published an exposure draft in September 2016 on targeted improvements to the accounting for long-duration contracts. After decades of deliberations, the IASB has completed its most recent exposure draft and plans to issue a final comprehensive accounting standard in the first half of 2017. Moreover, additional changes in the statutory accounting for most life insurance contracts are coming into effect; a company can elect to have Principles-Based Reserving (PBR) effective on new business as early as Jan. 1, 2017. Companies have three years to prepare for PBR, with all new business issued in 2020 required to be valued using PBR. The impact of these regulatory changes is likely to be significant to financial reporting, operations and the business overall. Instead of approaching accounting modernization as a compliance exercise, companies should view the changes holistically, with an understanding that there will be impacts to systems, processes, profit profiles, capital, pricing and risk. Planning effectively and building a case for change can create efficiencies and enhanced capabilities that benefit the business more broadly. Financial reporting modernization will affect the entire organization, not just the finance and actuarial functions. Operations and systems; risk management; product development, marketing and distribution; and even HR will need to change.
FASB’s Targeted Changes
In May 2015, the FASB issued Accounting Standards Update (ASU) 2015-09, Disclosures about Short-Duration Contracts. Rather than changing the existing recognition and measurement guidance in U.S. GAAP for short-duration contracts, the FASB responded to views from financial statement users by requiring enhanced disclosures for the liability for unpaid claims and claims adjustment expenses. The disclosures include annual disaggregated incurred and paid claims development tables that need not exceed ten years, claims counts and incurred but not reported claim liabilities for each accident year included within the incurred claim development tables, and interim (as well as year-end) roll forwards of claim liabilities. The enhanced disclosures will be effective for public business entities for annual reporting periods beginning after Dec. 15, 2015 (i.e., 2016 for calendar year end entities) and interim reporting periods thereafter. The new disclosures may require the accumulation and reporting of groupings of claims data that differ from what insurers currently capture for U.S. statutory and other reporting purposes. Public companies are preparing now by making changes to existing processes and systems and performing dry runs of their processes to produce these disclosures. Non-public business entities will have a one-year deferral to allow additional time for preparation. See also: Who Is Innovating in Financial Services?   In September 2016, the FASB issued a proposed ASU on targeted improvements to the accounting for long-duration contracts.
Proposed revisions include requiring the updating of cash flow assumptions and use of a high-quality fixed-income discount rate that maximizes the use of market-observable inputs in calculating various insurance liabilities, simplifying the deferred acquisition costs amortization model and requiring certain insurance guarantees with capital market risk to be reported at fair value. The FASB also proposed enhanced disclosures, which include disaggregated roll forwards of certain asset and liability balances, additional information about risk management and significant estimates, and the inputs, judgments and assumptions used to measure various liabilities and to amortize deferred acquisition costs (“DAC”). No effective date was proposed, and transition approaches were provided with the recognition that full retrospective application may be impracticable.
IASB to issue a new comprehensive standard
The IASB’s journey to a final, comprehensive insurance contracts standard is nearly complete. After reviewing feedback from field testing by selected companies in targeted areas, the IASB completed its deliberations in November 2016. The IASB staff is proceeding with drafting IFRS 17 (previously referred to as IFRS 4 Phase II) with a proposed effective date of Jan. 1, 2021. Three measurement models are provided for in the standard: 1) Building Block Approach (“BBA”); 2) Premium Allocation Approach (“PAA”); and 3) the Variable Fee Approach (“VFA”). The default model for all insurance contracts is the BBA, which is based on a discounted cash flow model with a risk adjustment and deferral of up-front profits through the Contractual Service Margin (CSM). This is a current value model in which changes in the initial building blocks are treated in different ways in the P&L. Changes in the cash flows and risk adjustment related to future services are recognized by adjusting the CSM, whereas those related to past and current services flow to the P&L. The CSM amortization pattern is based on the passage of time and drives the profit recognition profile. The effect of changes in discount rates can either be recognized in other comprehensive income (OCI) or P&L. The IASB has also allowed for the use of the PAA for qualifying short-term contracts, or those typically written by property and casualty insurers. This approach is similar to unearned premium accounting for unexpired risks, with certain differences such as deferred acquisition costs offsetting the liability for remaining coverage rather than being reflected as an asset. The claims liability, or liability for incurred claims, is measured using the BBA without a CSM. Discounting of this liability for incurred claims would be required, except where a practical expedient applies for contracts in which claims are settled in one year or less from the incurred date. Similar to the BBA model, the effect of changes in discount rates can either be recognized in other comprehensive income (“OCI”) or the P&L. The VFA is intended to be applied to qualifying participating contracts. This model was subject to extensive deliberations, considering the prevalence of such features in business issued by European insurers. This model recognizes a linkage of the insurer’s liability to underlying items where the policyholders are paid a substantial share of the returns, and a substantial proportion of the cash flows vary with underlying items.
The VFA is the BBA model but with notable differences in treatment, including changes in the insurer’s share of assets being recognized in the CSM, accretion of interest on the CSM at current rates, and P&L movements in liabilities mirroring the treatment on underlying assets with differences in OCI, if such an option is elected. The income statement will be transformed significantly. Rather than being based on premium due or received, insurance contract revenue will be derived based on expected benefits and expenses, allocation of DAC and release of the CSM and risk adjustment. The insurance contracts standard also requires substantial disclosures, including disaggregated roll forwards of certain insurance contract assets and liability balances.
Statutory accounting: The move to principles-based reserving
The recently adopted Principles-Based Reserving (“PBR”) is a major shift in the calculation of statutory life insurance policy reserves and will have far-reaching business implications. The former formulaic approach to determining policy reserves is being replaced by an approach that more closely reflects the risks of products. Adoption is permitted as early as 2017 with a three-year transition window. Management must indicate to its regulator whether it plans to adopt PBR before 2020. PBR’s primary objective is to have reserves that properly reflect the financial risks, benefits and guarantees associated with policies and also reflect a company’s own experience for assumptions such as mortality, lapses and expenses. The reserves would also be determined by assessing the impact of a variety of future economic scenarios. PBR reserves can require as many as three different calculations based on the risk profile of the products and supporting assets. Companies will hold the highest of three reserves: a formula-based net premium reserve and two principles-based reserves – a Stochastic Reserve (SR) based on many scenarios and a Deterministic Reserve (DR) based on a single baseline scenario (a simplified sketch of this calculation follows the list below). The assumptions underlying principles-based reserves will be updated for changes in the economic environment, for changes in company experience and for changes in margins to reflect the changing nature of the risks. A provision called the “Exclusion Tests” allows companies the option of not calculating the stochastic or deterministic reserves if the appropriate exclusion test is passed. Reserves under PBR may increase or decrease depending on the risks inherent in the products. PBR requirements call for explicit governance over the processes for experience studies, model inputs and outputs and model development, changes and validation. In addition, regulators will be looking to perform a more holistic review of the reserves. Therefore, and as we noted in the 2015 edition of this publication, it is critical that:
  • The PBR reserve process is auditable, including the setting of margins and assumptions, performing exclusion tests, sensitivity testing, computation of the reserves and disclosures;
  • Controls and governance are in place and documented, including assumption oversight, model validation and model risk controls; and
  • Experience studies are conducted with appropriate frequency and a structure for sharing results with regulators is developed.
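As flagged above, the reported PBR reserve is the highest of the calculated components. Here is a simplified sketch of that "highest of three" logic; the scenario set, the exclusion-test handling and the CTE-style aggregation of scenario reserves are illustrative stand-ins for the full VM-20 requirements, not a compliant implementation.

```python
# Simplified sketch of the "highest of three reserves" idea described above.
# Not VM-20-compliant; aggregation and exclusion-test handling are assumptions.
from statistics import mean
from typing import List, Optional


def stochastic_reserve(scenario_reserves: List[float], cte_level: float = 0.70) -> float:
    """Average of the worst (1 - cte_level) tail of scenario reserves (CTE-style)."""
    ordered = sorted(scenario_reserves, reverse=True)
    tail_count = max(1, round(len(ordered) * (1 - cte_level)))
    return mean(ordered[:tail_count])


def pbr_reserve(npr: float,
                deterministic: Optional[float],
                scenario_reserves: Optional[List[float]]) -> float:
    """Hold the highest of the net premium reserve and whichever
    principles-based reserves the exclusion tests require."""
    candidates = [npr]
    if deterministic is not None:        # deterministic reserve was required
        candidates.append(deterministic)
    if scenario_reserves:                # stochastic reserve was required
        candidates.append(stochastic_reserve(scenario_reserves))
    return max(candidates)


if __name__ == "__main__":
    scenarios = [95.0, 102.0, 110.0, 88.0, 130.0, 97.0, 105.0, 91.0, 99.0, 120.0]
    # Worst-30% average of the scenarios is 120, which exceeds NPR and DR here.
    print(pbr_reserve(npr=100.0, deterministic=104.0, scenario_reserves=scenarios))
```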
PBR will introduce volatility to life statutory reserving, causing additional volatility in statutory earnings. Planning functions will be stressed in forecasting the impact of PBR over their planning horizons, because three different reserve calculations will need to be forecast.
Implications
A company’s approach to addressing these changes can vary depending on a variety of factors, such as the current maturity level of its IT architecture and structure, the potential impact of proposed changes on earnings emergence and regulatory capital and current and planned IT and actuarial modernization initiatives. In other words, there is no “one size fits all” approach to addressing these changes. Each company likely will be starting from a different place and may have different goals for a future state. A company should invest the time to develop a strategic plan to address these changes with a solid understanding of the relevant factors, including similarities and differences between the changes. In doing so, companies should keep in mind the following potential implications:
Accounting & Financial Reporting
  • Where accounting options or interpretations exist, companies should thoroughly evaluate the implications of such decisions from a financial, operational and business perspective. Modeling can be particularly useful in making informed decisions, identifying pros and cons and facilitating decisions (a simple measurement sketch follows this list).
  • Financial statement presentation, particularly in IFRS 17, could change significantly. Proper planning and evaluation of requirements, presentation options, granularity of financial statement line items and industry views will be essential in building a new view of an insurer’s financial statements.
  • Financial statement disclosures could increase significantly. Requirements such as disaggregated roll forwards could result in companies reflecting financial statement disclosures, investor supplements and other external communications at lower levels than previously provided.
  • Change is not limited to insurance accounting. Other areas of accounting change include financial instruments, leasing and revenue recognition. For example, the impact of changes in financial instruments accounting will be important in evaluating decisions made for the liability side of the balance sheet.
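To make the modeling point in the first bullet concrete, here is a toy version of the IFRS 17 building-block measurement described earlier: discounted fulfilment cash flows plus a risk adjustment, with a CSM deferring any day-one profit. The flat discount rate, annual cash flows and lump-sum risk adjustment are simplifying assumptions, not the standard's requirements.

```python
# Toy IFRS 17 building-block measurement at initial recognition, for what-if
# modeling. Single flat discount rate and annual cash flows are simplifications.
from typing import List


def present_value(cash_flows: List[float], rate: float) -> float:
    """Discount annual end-of-year cash flows at a flat rate."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))


def initial_recognition(premiums: List[float],
                        claims_and_expenses: List[float],
                        rate: float,
                        risk_adjustment: float) -> dict:
    """Return fulfilment cash flows, CSM and any day-one loss."""
    fcf = (present_value(claims_and_expenses, rate)
           - present_value(premiums, rate)
           + risk_adjustment)
    csm = max(0.0, -fcf)           # defer expected profit; no day-one gain
    day_one_loss = max(0.0, fcf)   # onerous contracts hit the P&L immediately
    return {"fulfilment_cash_flows": round(fcf, 2),
            "csm": round(csm, 2),
            "day_one_loss": round(day_one_loss, 2)}


if __name__ == "__main__":
    print(initial_recognition(premiums=[100, 100, 100],
                              claims_and_expenses=[70, 80, 90],
                              rate=0.03,
                              risk_adjustment=15.0))
    # Expected net inflows produce a positive CSM and no day-one loss.
```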
See also: The Defining Issue for Financial Markets
Operational
  • Each of these accounting changes depends on a company’s ability to produce cash flow models and use well-controlled data. Companies should consider performing a current-state assessment of their capabilities, leveraging, to the extent possible, infrastructure developed to comply with other regulatory changes such as Solvency II and ORSA, and identifying where enhancements or new technology are needed.
  • Given the increased demands on technology, computing and data resources that will be required, legacy processes and systems will not likely be sufficient to address pending regulatory and reporting changes. However, this creates an opportunity for these accounting changes to possibly be a catalyst for finance and actuarial modernization initiatives that did not historically have sufficient business cases and appetite internally for support.
  • As these accounting changes are generally based on the use of current assumptions, there will be an increased emphasis on the ability to efficiently and effectively evaluate historical experience on products by establishing new processes or enhancing existing ones (a bare-bones experience-study example follows this list). Strong governance over experience studies, inputs, models, outputs and processes will be essential.
  • As complexity increases with the implementation of these accounting changes, the impact on human resources could be significant. Depending on how many bases of accounting a company is required to produce, separate teams with the requisite skill sets may be necessary to produce, analyze and report the results. Even where separate teams are not needed, the close process will place additional demands on existing staff given the complexity of the new requirements and impact to existing processes. Companies may want to consider a re-design of their close process, depending on the extent of the impacts.
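As a small illustration of the experience-study point above, the sketch below computes actual-to-expected ratios against current mortality assumptions. The exposure counts, assumed rates and age bands are made up for illustration.

```python
# Bare-bones actual-to-expected (A/E) experience study. All figures are
# illustrative assumptions, not real company experience.
from typing import Dict, Tuple

# age band -> (policy-years exposed, actual deaths, assumed mortality rate)
experience: Dict[str, Tuple[float, int, float]] = {
    "40-49": (12_000, 18, 0.0018),
    "50-59": (9_500, 41, 0.0045),
    "60-69": (6_200, 67, 0.0102),
}


def ae_ratios(data: Dict[str, Tuple[float, int, float]]) -> Dict[str, float]:
    """Compare actual deaths with deaths expected under current assumptions."""
    results = {}
    for band, (exposure, actual, assumed_rate) in data.items():
        expected = exposure * assumed_rate
        results[band] = round(actual / expected, 3)
    return results


print(ae_ratios(experience))
# Ratios well above 1.0 suggest the assumption may need strengthening;
# ratios well below 1.0 suggest it may be holding excess margin.
```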
Business
  • Product pricing could be affected as companies consider the financial impacts of these accounting changes on profit emergence, capital and other internal pricing metrics. For instance, the disconnect of asset yields from discounting used in liabilities under U.S. GAAP and IFRS could result in a different profit emergence or potentially create scenarios where losses exist at issuance.
  • Companies may make different decisions on asset and liability matching or choose to hedge risk on products differently. Analysis should be performed to understand changes in the measurement approach with respect to discount rates and financial impacts of guarantees such that an appropriate strategy can be developed.
  • The move to accounting models where both policyholder behavior and market-based assumptions are updated more frequently will likely result in greater volatility in earnings. Management reporting, key performance indicators, non-GAAP measures, financial statement presentation and disclosure and investor materials will need to be revisited so that an appropriate management and financial statement user view can be developed.
  • The impact from a human resources perspective should not be underestimated. Performance-based employee compensation plans that are tied to financial metrics will likely need to change. Employees will also need to receive effective training on the new accounting standards, processes and systems that will be put in place.
Forming a holistic strategy and plan to address these changes will promote effective compliance, reduce cost and disruption and increase operational efficiency, as well as help insurers create more timely, relevant and reliable management information. Given the pervasive impact of these changes, it is important that companies put in place an effective governance structure to help them manage change and set guiding principles for projects. This involves, for example, the development of steering committees, work streams and a project management office at the corporate and business group levels that can effectively communicate information, navigate difficult decisions, resolve issues and ensure progress stays on track. Each company has a unique culture and structure; governance will therefore need to be developed with that in mind to ensure it works for the organization. Companies that do not plan effectively and establish effective governance structures are likely to struggle with subpar operating models, higher capital costs, compliance challenges and an overall lack of competitiveness.

Richard de Haan

Richard de Haan is a partner and leads the life aspects of PwC's actuarial and insurance management solutions practice. He provides a range of actuarial and risk management advisory services to PwC’s life insurance clients. He has extensive experience in various areas of the firm’s insurance practice.