
Atlanta: The Ripening Silicon Peach

Atlanta's low costs, universities and growing population of Fortune 1,000 companies are making it a fast-growing tech and insurance hub.

When evaluating the beginnings of established tech markets in the U.S., several regional characteristics recur and can serve as indicators of a city's tech trajectory. Consider Palo Alto, New York and Seattle, also known as the centers of Silicon Valley, Silicon Alley and Silicon Forest, respectively. Each has unique geographic advantages, easy access to Millennial tech talent, attractive quality-of-life benefits and specialized technology roots.

The same pattern is beginning to emerge in Atlanta. Atlanta's combination of low cost of doing business, educational institutions and growing population of Fortune 1,000 companies is making it one of the fastest-growing tech hubs in the country. This year, Atlanta was ranked among the top 10 tech talent markets with a 21% growth in tech jobs since 2010, according to the latest CBRE report.

One driver of Atlanta's recent economic and tech growth is the arrival of the insurance industry. The industry is undergoing a tech transformation of its own, and several of its leading companies have recently set up shop in the region.

Let's take a look at how we got here.

Tech Market Drivers

In addition to a prominent business ecosystem - Georgia is home to 20 Fortune 500 headquarters and 33 Fortune 1,000 companies - Atlanta's tech surge is largely fueled by its world-class universities, which emphasize technology specialization and diversity, and its reputation as an attractive work-life destination.

Like Silicon Valley's beginnings with tech recruits from nearby Stanford University, Atlanta's Midtown is within walking distance of two respected universities, the Georgia Institute of Technology and Georgia State. Georgia Tech is currently ranked seventh in the nation among public universities, and its college of engineering is consistently ranked in the nation's top five. Georgia State is ranked fifth in the nation for its risk management and insurance program. The vast pool of graduates each year is a huge attraction for start-ups and Fortune 1,000 companies alike.

Atlanta's universities are also known for their emphasis on diversity. Georgia Tech is consistently rated among the top universities for graduation rates of underrepresented minorities in engineering, computer science and mathematics. That emphasis has carried over into the region's broader tech community -- Georgia is ranked as one of the top five states for women-owned businesses, which grew 132% from 1997 to 2015, according to the U.S. Census Bureau.

As for work-life attractiveness, Atlanta has been named the "top city people are moving to" by Penske for the last five years, because of its range of job opportunities, low cost of living and appealingly warm weather. Similarly, the job website Glassdoor named Atlanta one of the top 10 cities for software engineers.

Insurance Intersect

The insurance industry is one of the key drivers of economic growth in the country -- and it is establishing major roots in the Atlanta region. Just in the last few years, a number of insurance companies have relocated head offices to the South. Recently, State Farm announced the addition of 3,000 jobs over the next 10 years, and MetLife just announced a significant investment in Midtown, choosing the area for its proximity to rapid transit and the international airport. From Atlanta, travelers can reach 90% of the U.S. in fewer than three hours.

Insurance growth in the region is also likely linked to the density and size of insurance claims on the East Coast, with the largest insurance providers located along the corridor from Boston down to Miami. The 10 most costly hurricanes in U.S. history have hit the East Coast, and four have greatly affected Georgia.

2015 and Beyond

Looking ahead, I expect the majority of insurance companies to increase their visibility in Atlanta, as they'll find a wider pool of insurance experts and other advantages that cater to the industry's growth. Similar to the tech hubs ahead of it, Atlanta will continue taking advantage of its geography, access to talent and cultural ideals to not only build its tech community but to also push the insurance industry forward. The U.S. will soon have another major tech hub to be proud of.


Neil Snowdon

Profile picture for user NeilSnowdon

Neil Snowdon

Neil Snowdon is an experienced leader of successful multi-national development and professional services teams on U.S. and international projects. He has high-level experience at helping turn around faltering business relationships and restoring profitability. Snowdon is vice president of development at Vertafore, the market-leading provider of software as a service and on-premise solutions for the insurance industry.

Telematics: Now a 'Movie,' Not 'Snapshot'

Telematics is moving rapidly toward PHYD (pay how you drive) and is starting to incorporate a wide range of variables.

Traditional underwriting of an auto policy is based on a snapshot of static variables describing the client and the vehicle - their impact and weight on pricing are determined by analyzing the company's historical claims series - and renewal comes after taking the same type of snapshot 12 months later.

Telematics is increasingly being used to change this approach and move toward individual pricing of risks based on a "movie" of the client's driving: Already today, more than half of the black-box products on the Italian market carry a usage-based insurance (UBI) tariff. (The rest of these products have no variable component linked to telematics information, only an up-front flat discount.)

The ways in which telematics data can be used within the tariff mechanism fall into three main categories:

  1. Telematics can be seen as an option on the existing tariff or as a stand-alone product;
  2. The client's value proposition can be "real individual pricing" applied during the first year, or a fixed discount for the first year and the "promise of a discount" at renewal based on the driving behavior in the previous 12 months;
  3. The variables incorporated in the tariff can refer only to the distance traveled ("pay as you drive") or can also take into consideration a wider range of data on driving behavior ("pay how you drive").

Pay as you drive (PAYD)

This type of product prices based on distance traveled and represents the most commonly used UBI tariff approach on the Italian market: around 80% of UBI products currently use a "kilometer" approach.

This approach targets a fairly wide niche and is based on a discount created especially for clients who don't use their cars often. For example, California-based Metromile starts from the client's profile - based on traditional static variables - to determine the monthly fixed cost and the fee per kilometer; then, each month, Metromile uses the box to measure the "amount of risk exposure" (number of kilometers driven) and charges accordingly. Product innovation is moving toward assigning a different weight to the kilometers traveled based on the time of day and the type of road.
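The pricing mechanics can be made concrete with a small sketch. The rates and weights below are illustrative assumptions, not Metromile's actual figures; the point is simply that a PAYD bill is a fixed monthly cost plus a per-kilometer fee, with newer products weighting kilometers by time of day and road type.

```python
# Illustrative PAYD (pay-as-you-drive) monthly bill.
# All rates and weights are hypothetical, for demonstration only.

# Weights that make some kilometers "count" more than others
# (newer products weight distance by time of day and road type).
KM_WEIGHTS = {
    ("day", "highway"): 0.8,
    ("day", "urban"): 1.0,
    ("night", "highway"): 1.2,
    ("night", "urban"): 1.5,
}

def payd_monthly_bill(fixed_monthly_cost, fee_per_km, trips):
    """trips: list of (km, time_of_day, road_type) tuples recorded by the box."""
    weighted_km = sum(km * KM_WEIGHTS[(time_of_day, road_type)]
                      for km, time_of_day, road_type in trips)
    return fixed_monthly_cost + fee_per_km * weighted_km

# Example: a low-mileage driver's month of recorded trips.
trips = [(120, "day", "highway"), (40, "day", "urban"), (15, "night", "urban")]
print(round(payd_monthly_bill(fixed_monthly_cost=20.0, fee_per_km=0.04, trips=trips), 2))
```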

Looking at the PAYD solutions in Italy, in 30% of the cases telematics is an option on traditional policies and in 40% of the cases there is some form of adjustment of the premium during the first year.

Pay how you drive (PHYD)

This approach exploits the true potential of telematics to define the adequate price for each client, based not only on the "amount of exposure to risk" but on the "real level of risk" revealed by actual driving behavior. PHYD also brings major benefits by influencing driving behavior and by allowing for the acquisition and retention of less risky customers. In addition, insurers can switch from a niche approach to one that can be applied to the whole portfolio. Studies show that the ability to discriminate among risks is far greater: The 10% of clients identified as riskiest on the basis of behavioral telematics account for 40% of total claims, while identifying the riskiest 10% based on traditional variables usually intercepts only 25% to 30% of claims.
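A simple way to see what this "ability to discriminate" means: rank clients by a risk score, take the riskiest 10% and measure the share of total claims they account for. The sketch below shows that calculation on made-up data; the 40% and 25%-30% figures cited above come from the studies, not from this code.

```python
# Share of total claims cost accounted for by the riskiest decile,
# given some risk score (telematics-based or traditional).
# Portfolio data here is synthetic, purely to illustrate the calculation.

def top_decile_claims_share(policies):
    """policies: list of (risk_score, claims_cost) pairs."""
    ranked = sorted(policies, key=lambda p: p[0], reverse=True)
    cutoff = max(1, len(ranked) // 10)           # riskiest 10% of clients
    top_claims = sum(cost for _, cost in ranked[:cutoff])
    total_claims = sum(cost for _, cost in ranked)
    return top_claims / total_claims if total_claims else 0.0

# Toy portfolio: (score, annual claims cost in euros)
portfolio = [(0.9, 4000), (0.8, 2500), (0.7, 0), (0.6, 800), (0.5, 0),
             (0.4, 300), (0.3, 0), (0.2, 150), (0.1, 0), (0.05, 0)]
print(f"{top_decile_claims_share(portfolio):.0%} of claims sit in the riskiest decile")
```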

A very interesting example is the policy launched recently by Direct Assurance (Axa Group) in France: The product includes a self-installing telematics box that is sent to the client's home. The client's cost is adjusted from month to month based on her driving behavior (it may vary between plus 10% and minus 50% with respect to the first month's premium).

The range of variables that can be considered is wide. They start with kilometers traveled (given a different weight based on road type, time of day, day of the week and weather conditions). They move on to the intensity and length of braking, cornering and acceleration; respect for speed limits; time spent behind the wheel; familiarity with the roads; and any use of the mobile phone while driving.
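As a rough illustration of how such variables could feed a monthly adjustment, the sketch below combines a few behavioral indicators into a score and maps it onto an adjustment capped between plus 10% and minus 50% of the first month's premium, the band the Direct Assurance product uses. The indicator names and weights are invented for the example; an actual tariff would calibrate them on claims data.

```python
# Illustrative PHYD (pay-how-you-drive) monthly adjustment.
# Indicator names and weights are hypothetical; only the +10%/-50% band
# comes from the Direct Assurance example described above.

def phyd_multiplier(indicators, weights):
    """indicators: dict of behavioral measures scaled to [0, 1], where 1 = riskiest.
    Returns the premium multiplier for the month, between 0.50 and 1.10."""
    risk = sum(weights[k] * indicators[k] for k in weights) / sum(weights.values())
    # Map risk in [0, 1] to an adjustment between -50% (best) and +10% (worst).
    return round(0.50 + risk * 0.60, 2)

weights = {"harsh_braking": 3, "speeding": 3, "night_km_share": 2,
           "phone_use": 2, "unfamiliar_roads": 1}

gentle_driver = {"harsh_braking": 0.1, "speeding": 0.0, "night_km_share": 0.2,
                 "phone_use": 0.0, "unfamiliar_roads": 0.3}
risky_driver = {"harsh_braking": 0.8, "speeding": 0.9, "night_km_share": 0.7,
                "phone_use": 0.6, "unfamiliar_roads": 0.5}

first_month_premium = 60.0
print(round(first_month_premium * phyd_multiplier(gentle_driver, weights), 2))  # deep discount month
print(round(first_month_premium * phyd_multiplier(risky_driver, weights), 2))   # little or no discount
```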

It becomes clear that the growth of this type of solution - which today still represents only a small share of the millions of telematics insurance policies in circulation worldwide - will make the ability to extract insights from big data the key element of competition among insurance companies.

This article originally appeared in the Insurance Daily n. 738 Edition.

Dinner With Warren Buffett (Part 3)

Buffett's letters to shareholders include these five bits of timeless advice. One: Never downsize your underwriters during slowdowns.


This is the third article in this series. You can read the other two parts here: Part 1 and Part 2.

In reading Warren Buffett's letters to shareholders, we found many gems that inform our opinions and beliefs about how to be successful in the insurance industry. We wanted to share these five pieces that we found key:

1. Be great in your niche rather than a generalist

"[A great insurance manager] follows the policy of sticking with business that he understands and wants, without giving consideration to the impact on volume," Buffett wrote.

Developing an area of expertise and choosing target markets the company understands in-depth is essential to the success of a carrier or of an agency. This is particularly relevant today as new risks are emerging quickly. Attempting to cover a risk you don't fully understand is a fool's game.


2. Underwriting discipline is key for long-term success

"We hear a great many insurance managers talk about being willing to reduce volume in order to underwrite profitably, but we find that very few actually do so," Buffett wrote.

While it is common to talk about writing business only with strict underwriting criteria, it is hard to avoid the siren song of growth, especially for publicly traded carriers that have to worry about investors who only care about next quarter. Though it is challenging, a good underwriter must practice discipline in choosing the risks it insures. Profitable companies understand that this may mean growing more slowly, or less than the year before, but that it will protect profitability.

3. Never downsize your underwriters during slowdowns

"We don't engage in layoffs when we experience a cyclical slowdown at one of our generally profitable insurance operations. This no-layoff practice is in our self-interest. Employees who fear that large layoffs will accompany sizable reductions in premium volume will understandably produce scads of business through thick and thin (mostly thin)," Buffett wrote.

In his own companies, Buffett professes never to downsize underwriters because of slowdowns in the market. Rather, he prefers to keep the extra capacity so he is ready to pounce once the market comes around and business can be written at proper pricing with expected underwriting profitability. If a company wants to commit to profitability, employees must understand that their first priority is profitability. The best way to do this is to make it clear that employees will be rewarded when this profitability is achieved. Assuring employees that they are not in danger of being laid off because of slow growth is an effective signal. In addition, it is important to manage hiring practices during periods of growth so that the company is not overstaffed. The company must strive for efficiency during all cycles.


4. Understand the challenges of commoditization and regulation

"Insurance companies offer standardized policies which can be copied by anyone. Their only products are promises. It is not difficult to be licensed, and the rates are an open book. There are no important advantages from trademarks, patents, location, corporate longevity, raw material sources, etc., and very little consumer differentiation to produce insulation from competition. It is commonplace, in corporate annual reports, to stress the difference the people make. Sometimes this is true and sometimes it isn't. But there is no question that the nature of the insurance business magnifies the effect which individual managers have on company performance," Buffett wrote.

The fact that the industry is so stringently regulated is a challenge for insurers. It is important, therefore, to hire people who are committed to professional development and growth. If the people in your company are going to be the difference, they must believe in the industry and strive every day to do the best work they can. Supporting your employees' development efforts will inspire a strong culture of growth and achievement.

5. Reserve conservatively

"We are making every effort to get our reserving right. If we fail at that, we can't know our true costs. And any insurer that has no idea what its costs are is heading for big trouble. [...] The natural tendency of most casualty-insurance managers is to underreserve, and they must have a particular mindset - which, it may surprise you, has nothing to do with actuarial expertise - if they are to overcome this devastating bias. Additionally, a reinsurer faces far more difficulties in reserving properly than does a primary insurer," Buffett wrote.

Companies must recognize and support the need to have an accurate picture of their costs. To this end, a company should encourage its claims departments to strive for accuracy and report potential losses fairly. If a company does not know what it faces, it cannot set goals that will lead to success. Insurance is an unusual business in that we do not know our cost of goods sold until long after the pricing has been set and the policy has been sold; thus, proper reserving is of do-or-die importance.

Much of Buffett's advice in these five statements centers on the importance of well-educated and knowledgeable employees, which is one of the key things we push for at InsNerds.


Tony Canas

Profile picture for user TonyCanas

Tony Canas

Tony Canas is a young insurance nerd, blogger and speaker. Canas has been very involved in the industry's effort to recruit and retain Millennials and has hosted his session, "Recruiting and Retaining Millennials," at both the 2014 CPCU Society Leadership Conference in Phoenix and the 2014 Annual Meeting in Anaheim.

Home Insurers Ignore Opportunity in Flood

U.S. inland flood insurance is an untapped market unlike any in the world -- but requires that insurers change their thinking.

Recently, Munich Re announced its plan to step into the U.S. inland flood market to offer a competitive flood coverage endorsement for participating carriers. This is the second notable entry of international capital into an arena dominated by the federal government. Munich Re is known as a conservative giant of international reinsurance, so it might seem odd that it is joining the National Flood Insurance Program (NFIP) in covering U.S. flood. A quick look at the opportunity shows why the plan makes sense. U.S. inland flood insurance is an untapped source of non-correlated premium unlike any other in the world. The market is dominated by an incumbent market maker that is in trouble because it offers an inferior product that cannot price risk correctly (this paper nicely summarizes the problems at NFIP). So, here is what the new entrants are seeing:
  1. Contrary to industry beliefs, flood is insurable. The tools are present to accurately segment risk.
  2. Carriers offering flood capacity will differentiate themselves from competitors. This will give them a leg up on the competition in a market that is highly homogeneous. Carriers not offering flood will likely disappear.
  3. The market is massive, with potentially 130 million homes and tens of billions of dollars at stake.
Let’s go into details.

Capital Into a Ripe Market

The U.S. Flood Market

As most readers of Insurance Thought Leadership already know, many carriers have flood on the drawing board right now. The Munich Re announcement was not really a surprise. We all know there will be more announcements coming soon. Let’s summarize the market reasons for the groundswell of private insurance in U.S. flood.

The most obvious characteristic of the market is its size. For the sake of this post, we’ll just consider homes and homeowner policies. Whether one considers the number of NFIP policies in force as the market size (about 5.4 million policies in 2014), the number of insurable buildings (133 million homes) or something in between, there is clearly a big market. And the NFIP presents itself as the ideal competitor – big, with a mandate not necessarily compatible with business results. So, there is no doubt that a market exists. Can it be served? Yes, because the risk can be rated and segmented.

Low-Risk Flood Hazard

To be clear: A low-risk flood property has a profile with losses estimated to be low-frequency and low-severity. In other words: Flood events would rarely happen and would not cause much damage if they did. For many readers, joining the words “low-risk” and “flood” together is an oxymoron. We strongly disagree. Common sense and technology can both illustrate how flood risk can be segmented efficiently and effectively into risk categories that include “low.”

Let’s start with common sense. Flood loss occurs because of three possible types of flood: coastal surge, fluvial/river or rain-induced/pluvial (here is more information on the three types of flood). The vast majority of U.S. homeowners are not close enough to coastal or river flooding to have a loss exposure (here is a blog post that explores the distribution of NFIP policies). Thus, the majority of American homeowners are only exposed to excess surface water getting into the home. We’d be willing to wager that most of the ITL readership does not purchase flood insurance, simply because they don’t need it. That is the common-sense way of thinking of low-risk flood exposure.

How does the technology handle this? There is software available now that can be used to identify low-risk flood locations (as defined by each carrier), supported by the necessary geospatial data and analytics. Historically, this was not the case, but advances in remote sensing and computing capacity (as we explored here) make it entirely reasonable now, with location-based flood risk assessment the norm in several European countries. Distance to water, elevations, localized topographical analyses and flood models can all be used to assess flood risk with a high degree of confidence. In fact, claims are now best used as a handy ingredient in a flood score rather than as a prime indicator of flood risk.

How to Deliver Flood Insurance in the U.S.

Deliver Flood Insurance to What Kind of Market?

Readers must be wondering about the size of the market, because we offered two distinctly different possibilities above – is it about 5 million to 10 million possible policies, or 130 million? The difference is huge – the difference between a niche market and a mass market. The approach taken by flood insurers thus far is for a niche market. The current approach probably has long-term viability in high-risk flood, and the early movers that are now underwriting there are establishing solid market shares, cherry-picking from the NFIP portfolio.
On a large scale, though, the insurance industry’s approach needs to be for a mass market. Here is a case study describing the mass market opportunity:
  1. The property is in Orange County, CA, where the climate is temperate and dry, almost borderline desert. El Niño might be coming, but that risk can be built in.
  2. Using InsitePro (see image below), you can see that the property is miles and miles away from any coastal areas, rivers or streams. More importantly, the home is elevated against its surroundings, so water flows away from the property, which is deemed low-risk.
  3. The area has no history of flooding, and this particular community has one of the most modern drainage systems in the state.
Screenshot of InsitePro, courtesy of Intermap Technologies. FEMA zones in red and orange.
  4. Using Google Maps street view, we can estimate that the property is two to three feet above street level, which adds another layer of safety. Also, this view confirms that the area is essentially flat, so the property is not at the bottom of a bathtub.
  5. And, as with most homes in California, this property has no basement, so if water were to get into the house it would need to keep rising to cause further damage. (A simple screening sketch based on these checks follows below.)
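Taken together, checks like these can be expressed as a coarse screening rule. The sketch below is a minimal, hypothetical version of such a screen; the thresholds (distance to water, relative elevation and so on) are placeholders a carrier would set from its own geospatial data and risk appetite, not values taken from InsitePro or any actual underwriting guideline.

```python
# Minimal low-risk flood pre-screen for a single property.
# Thresholds are hypothetical placeholders, not actual underwriting rules.

def looks_low_risk(distance_to_water_km, elevation_vs_surroundings_ft,
                   past_flood_claims, has_basement):
    """Return True if the property passes a coarse low-risk flood screen."""
    checks = [
        distance_to_water_km >= 2.0,          # far from coast, rivers and streams
        elevation_vs_surroundings_ft >= 1.0,  # water drains away, not into a "bathtub"
        past_flood_claims == 0,               # no flood loss history
        not has_basement,                     # rising water has nowhere low to collect
    ]
    return all(checks)

# The Orange County case study described above, roughly encoded:
print(looks_low_risk(distance_to_water_km=8.0,
                     elevation_vs_surroundings_ft=2.5,
                     past_flood_claims=0,
                     has_basement=False))  # True: candidate for low-risk flood pricing
```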
To an underwriter, it should be clear that this home has minimal risk from flooding. As a sanity check, she could compare losses from flood for this property (and properties like it in the community) to other hazards such as fire, earthquake, wind, lightning, theft, vandalism or internal water damage. How do they compare? What are the patterns?

For this specific home, the NFIP premium for flood coverage is $430, which provides $250,000 in building limit and $100,000 in contents protection. The price includes the $25 NFIP surcharge. This is a mind-boggling amount of premium for the risk imposed. Consider that for roughly the same price you can get a full homeowners policy that covers all of these perils: fire, earthquake, wind, lightning, theft AND MORE! It is crazy to equate the risk of flood to the risk of all those standard homeowner perils, combined!

We provided this example to show that, even without all the mapping and software tools available for pricing, we can quickly conclude that NFIP pricing for these low-risk policies is absurdly high. Whatever the price “should” be for these types of risks, can you see that it MUST be a fraction of the price of a traditional homeowner’s policy? Don’t believe that either? Consider that Lloyd’s is marketing its low-risk flood policies as “inexpensive,” and brokers tell us privately that many base-level policies will be 50% to 75% less expensive than NFIP equivalents.

The news gets even better. There are tens of millions of houses like this case example, and technology is now available to quickly find them. These risks aren’t the exception; these risks can be a market in their own right. Let the mental arithmetic commence!

Summary: Differentiate or Die!

The Unwanted Commodity

Most consumers of personal lines products don’t have the time or the ability to evaluate an insurance policy to determine whether it provides good value. Regrettably, most agents and brokers don’t have the time to help them, either. So, when shopping for a product that they hope they will never use and that they are incapable of truly understanding, consumers will focus on the one thing they do understand: price. Competing on price becomes a race to the bottom (yay! – another soft market) and to death.

But there is an opportunity here: Carriers that compete on personal lines/homeowner insurance with benefits that are immediately apparent (like value, flexibility, service, conditions and, inevitably, price) have a rare chance to stake out significant new business, or to solidify their own share. The flood insurance market is real, and it’s big enough for carriers to establish a healthy and competitive environment where service and quality will stand out, along with price. Carriers that would like to avoid dinosaur status can remain relevant and competitive with no departure from insurance fundamentals – rate a risk, price it and sell it. It’s obvious, right?

Which carriers will be decisive and bold and begin to differentiate by offering flood capacity? Which carriers will evolve to keep pace or even lead the pack into the next generation of homeowner products? More importantly, which of you will lose market share and cease to exist in 10 years because you didn’t know what innovation looks like?

Nick Lamparelli

Profile picture for user NickLamparelli

Nick Lamparelli

Nick Lamparelli has been working in the insurance industry for nearly 20 years as an agent, broker and underwriter for firms including AIR Worldwide, Aon, Marsh and QBE. Simulation and modeling of natural catastrophes occupy most of his day-to-day thinking. Billions of dollars of properties exposed to catastrophe that were once uninsurable are now insured because of his novel approaches.


Ivan Maddox

Profile picture for user IvanMaddox

Ivan Maddox

Ivan Maddox is a geospatial engineer who for 20 years has been solving problems with location-based solutions for a variety of industries, including geophysical, governmental, telecommunications, and, now, insurance.

How Google Thinks About Insurance

Google's Project Sunroof, which uses exceptionally sophisticated mapping data, may show how Google will approach insurance.

For those of us wondering what Google plans to do in insurance -- isn't that all of us? -- it's worth looking at the company's Project Sunroof. The project uses exceptionally sophisticated mapping data to determine which homeowners would most benefit from solar panels and, in the process, may provide some insight into how Google is approaching insurance.

To me, there are two key aspects of Project Sunroof. The first is that Google is taking a bottom-up approach that could inform a lot of decisions about insurance (while insurers traditionally go top-down). The second is that Google is being unusually smart about combining layers of information -- some proprietary, some in the public domain; some new, some long-available -- to produce what my frequent co-author Chunka Mui has, with a little help from me, labeled "emergent knowledge." ("Big data" is the term commonly used, but data isn't very interesting, while knowledge is. And the size of the database doesn't matter. What matters is using developments in technology to look in the right places to find the right data to answer the right questions so that revelations emerge.)

Top-down vs. bottom-up

Insurers typically start with pools of risk. They're getting much more sophisticated about subdividing those pools into ever smaller groups, but the thinking is still along the lines of "drivers without moving violations who travel 11,000 to 12,000 miles a year in generally suburban conditions." Insurers will keep getting more and more specific and produce more and smaller pools but are still going from the top down.

Now look at Project Sunroof. Google is modeling the world in three dimensions and using that model to generate information house by house based on totally personalized criteria: on the square footage on the roof that would be available for solar panels, on the amount of cloud cover that is expected to obscure the sun above that house, on the effectiveness of the sunlight that will hit the roof (incorporating calculations based on temperature and on how the angle of the sun changes each day) and on any shade that would be cast on those panels from other structures. Although the article doesn't say so, I assume Google calculates potential savings on solar based on the rates of each local utility. In any case, there are no pools in sight for Google -- unless you want it to tell you about those in the backyards.

That same model of the world could be the basis for a house-by-house, car-by-car, person-by-person approach to insurance for Google. And, if this approach works, Google will gain the sort of information advantage that has proved to be almost impossible to overcome. Even the largest insurers would have a hard time spending the money that Google has to map the U.S. by having cars drive every single street to take pictures and collect data, by making a series of acquisitions of data providers and by employing a small army of people to manually fix errors and update maps -- and Google would still have a years-long head start in developing its model of the world. Microsoft has thrown billions of dollars at search engines, but even Microsoft couldn't overcome the fact that Google's dominant share meant it was always learning and improving faster than Microsoft's Bing. Apple's map services were ridiculed, by comparison with Google's, when Apple launched them in September 2012.
Apple is now at least in Google's ballpark on mapping, but no insurer can come close to Apple's resources -- a $646 billion market valuation and $202.8 billion of cash in the bank. That's "billion," with a "b."

Emergent knowledge

Google obviously begins with a huge asset because of its prescient decision years ago to map the entire U.S. and because of the recent work that has made that map 3D. But Google is also taking data wherever it can get it. I know from some work I did at the Department of Energy in 2010 that national maps of sunlight have been available for years, and they have surely become far more detailed as the interest in solar power has spread, so I assume Google didn't have to generate those maps on its own. Temperature maps are also in the public domain. (Especially high or low temperatures degrade the performance of solar panels.) Those maps will become increasingly granular as they incorporate data from smartphones and other widely used devices that can act as sensors -- temperature will no longer be what the weather station reports from the Detroit airport; temperature will be known house by house. Overhead photos from satellites and, in some cases, drones are widely available, so Google can use those to check square footage of roofs, to see which direction the solar panels would point and so on. Google can collect information on rates from state utility commissions, where utilities have to make regular filings.

It's easy to imagine Google layering similar types of information onto its map of the world for insurance purposes. In response to the federal Data.gov initiative, governments at all levels are making more information available digitally, so Google could incorporate lots of data about where and when accidents occur, where break-ins happen, where and when muggings occur and so on. Google could incorporate private work that is taking a 3D approach to flood risk (whether your house is three feet higher or lower than the average in a neighborhood can make all the difference) and is being much more discriminating about earthquake risk. Google could add information, from public or private sources, on the age of homes, type of pipes used, appliances, etc. to flesh out its understanding of the risks in homes. And, of course, Google will have lots of very precise information of its own to add to its model of the world, based on, for instance, what it knows about where your smartphone is and can infer about where you park your car, where and when you drive, etc. Once you take all this information and map it to such a precise model, there will surely be some non-obvious and highly valuable insights.

WWGD: What Will Google Do?

Looking at Project Sunroof still doesn't say a lot about how Google will attack insurance. Will it just sell increasingly targeted and valuable ads? Will it sell leads? Will it become a broker? Will it do more? But I think it's safe to say that, whatever Google does, its starting point will be the most sophisticated model of the world -- and that model will always be improving.
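To make the bottom-up idea concrete, here is a rough sketch of the kind of house-by-house arithmetic described above: roof area, sunlight, shading and local utility rates combined into an estimate for one address. The formula and every number in it are simplified assumptions for illustration; this is not Project Sunroof's actual model.

```python
# Back-of-the-envelope, house-by-house solar savings estimate.
# All factors and constants are illustrative assumptions, not Google's model.

def annual_solar_savings(usable_roof_m2, annual_insolation_kwh_per_m2,
                         shade_fraction, panel_efficiency, utility_rate_per_kwh):
    """Estimate one home's annual savings from rooftop solar, in dollars."""
    generated_kwh = (usable_roof_m2
                     * annual_insolation_kwh_per_m2
                     * (1.0 - shade_fraction)      # shading from nearby structures
                     * panel_efficiency)           # losses from temperature, sun angle, hardware
    return generated_kwh * utility_rate_per_kwh    # valued at the local utility's rate

# One hypothetical house:
print(round(annual_solar_savings(usable_roof_m2=30,
                                 annual_insolation_kwh_per_m2=1600,
                                 shade_fraction=0.15,
                                 panel_efficiency=0.18,
                                 utility_rate_per_kwh=0.14), 2))
```

The same per-address structure, with different layers (crime data, elevation, building age), is what a bottom-up approach to insurance pricing would look like.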

Paul Carroll

Profile picture for user PaulCarroll

Paul Carroll

Paul Carroll is the editor-in-chief of Insurance Thought Leadership.

He is also co-author of A Brief History of a Perfect Future: Inventing the Future We Can Proudly Leave Our Kids by 2050 and Billion Dollar Lessons: What You Can Learn From the Most Inexcusable Business Failures of the Last 25 Years and the author of a best-seller on IBM, published in 1993.

Carroll spent 17 years at the Wall Street Journal as an editor and reporter; he was nominated twice for the Pulitzer Prize. He later was a finalist for a National Magazine Award.

New Leaders in Race for the Killer App

If the industry isn’t currently innovating, new competitors entering the market are! There are dozens of examples of potential killer apps.

Last week, we looked at how insurers have missed opportunities, multiple times, to become real innovators and follow through on developing the new killer apps. We took as our inspiration the book The New Killer Apps: How Large Companies Can Out-Innovate Start-ups, by Chunka Mui and Paul B. Carroll.

This week, we’re looking forward. Is the insurance industry thinking big enough? Is it starting small? Is it learning fast by experimenting, testing and learning from failures?

If the industry isn’t currently innovating, new competitors entering the market are! There are dozens of examples. Consider Climate Corp. and its re-imagination of crop insurance using mobile, cloud and emergent knowledge. There is Google with its aggregator site, Google Compare, allowing drivers to search for the best car insurance deal, creating an interesting and formidable channel. Then there is Peers, a nonprofit developing products for those participating in the sharing economy. These aren’t just new ways to sell. In many cases, they are groundbreaking models of protection that respond to new models of life. The rapid experimentation and expansion of social media or peer-to-peer-based insurance companies, introduced by Friendsurance in Germany, now includes new companies like InsPeer in France and Guevara in the UK. Guevara pools customers’ premiums online to save money, and any money left in the pool at the end of the year stays with the group and lowers everyone’s price the next year. Will automotive companies be next with the emergence of connected and autonomous cars? Where are the 21st century affinity models going to lead us?

So how do established insurers out-innovate, let alone compete? How do they become this generation’s emerging insurance leaders? They must look beyond long-held traditional views and become accustomed to major business shifts. Instead of transforming the business using age-old assumptions and traditional business models, insurers must look to reinvent the business model, not unlike how Uber reinvented the taxi model or how the new entrants are reinventing insurance.

Starting at the Core: What Is Insurance?

Today’s insurers may be tempted to look at surface issues, just like they did in 1998. How do we sell differently? How do we empower our agents? How do we touch consumers? But to understand the real reinvention of insurance, consider how confusing the concept of insurance has become to the consumer. The insurance industry sells a product that may be legally required, such as auto insurance. The product may not really be deemed a necessity, such as life insurance. To the consumer, premiums seem to simply evaporate down a hole, instead of accruing like mutual funds. And many consumers would suggest insurance isn’t even a “product” at all! If an insurer has to spend too much time educating and selling AND it lags behind in digital technology and excellent customer experience, how long do you think it will last when a new competitor comes along and offers a new model of engagement that is simple and easy to understand (and uses the latest technology)?

Insurers need to erase the white board and begin with a redefinition of insurance. How do we protect people in all of the new ways that they live while using digital technology to actually make the experience of buying, owning and using the product we sell more appealing? There are start-ups and venture capital looking to “uberize” insurance to be that new competitor and disrupt insurance. But traditional insurers still have an edge.
Start-ups and "unicorns" (those challenging industry disrupters with billion-dollar pre-IPO values) don’t have the knowledge of pricing, profitability and regulatory requirements or a base of customers. This is where insurance as an established industry can compete and out-innovate. To do so, we must embrace the digital revolution. We must turn long-held business assumptions upside down. We must reinvent the business model. We should embrace trends and experiment. And we must effectively answer the consumer’s question, “What is insurance, and why is it important to me?”

Instead of implementing modern core insurance systems on-premise and converting your existing business, look to cloud options where you can experiment, build innovative products and new channels and reach new market segments fast and less expensively. Look to your partners to provide an ecosystem of options and capabilities that will help you meet existing customer needs and reach new customers. Look to your peers to see if you can partner or collaborate, leveraging the assets of both organizations in new ways. By doing so, you can find innovative ways to create a business model outside the traditional business, seek new market segments and new regions through an insurance-as-a-service model and reach more customers with alternative channels.

Insurance business models, assumptions and practices of the past decades and centuries are less durable in today’s game-changing marketplace. Competitive dominance is no longer achieved by operational efficiency, lower prices, massive advertising, large internal systems or channel loyalty. It is achieved by anticipating trends and pivoting quickly to create and capture the economic and competitive opportunity. To win on the final lap (the one that counts), insurers need to make moves today that will position them as emerging leaders. In this race, there is no one path that moves you across the finish line. There is no singular destination. There is only a world of possibilities.

So think big! Start small. Act fast. Unicorns beware. We’re coming for you in this final lap!

Denise Garth

Profile picture for user DeniseGarth

Denise Garth

Denise Garth is senior vice president, strategic marketing, responsible for leading marketing, industry relations and innovation in support of Majesco's client-centric strategy.

6 Lessons From Katrina, 10 Years On

Katrina shows we must reduce exposure, especially by helping residents in low-income areas invest in measures to lower their losses.

In December 2005, just three months after Katrina savaged the Gulf Coast, we edited On Risk and Disaster, a book on the key lessons that the storm so painfully taught. The book was very different from most of the post-mortems that focused on the country’s lack of preparedness for the storm’s onslaught. It focused sharply on how to reduce the risk of future disasters—and how to understand how to help those who suffer most from them.

One of the most important findings highlighted by the book’s 19 expert contributors was that the storm affected lower-income residents far more than others. Reducing the exposure to potential damage before disasters occur, especially in the most hazard-prone areas, is one of the most important steps we can take. To achieve this objective in low-income areas, residents often need help to invest in measures to reduce their losses. Failing to learn these lessons will surely lead to a repeat of the storm’s awful consequences. Now, 10 years after Katrina struck, six lessons from the book loom even larger.

1. Disasters classified as low-probability, high-consequence events have been increasing in likelihood and severity.

From 1958 to 1972, the number of annual presidential disaster declarations ranged between eight and 20. From 1997 through 2010, they ranged from 50 to 80. The National Oceanic and Atmospheric Administration reported that the number of severe weather events—those that cause $1 billion in damage or more—has increased dramatically, from just two per year in the 1980s to more than 10 per year since 2010. That trend is likely to continue.

2. Most individuals do not purchase insurance until after suffering a severe loss from a disaster—and then often cancel their policies several years later.

Before the 1994 Northridge earthquake in California, relatively few residents had earthquake insurance. After the disaster, more than two-thirds of the homeowners in the area voluntarily purchased coverage. In the years afterward, however, most residents dropped their insurance. Only 10% of those in seismically active areas of California now have earthquake insurance, even though most people know that the likelihood of a severe quake in California today is even higher than it was 20 years ago. Moreover, most homeowners don’t keep their flood insurance policies. An analysis of the National Flood Insurance Program in the U.S. revealed that homeowners typically purchased flood insurance for two to four years but, on average, they owned their homes for about seven years. Of 841,000 new policies bought in 2001, only 73% were still in force one year later, and, after eight years, the number dropped to just 20%. The flood risk, of course, hadn’t changed; dropping the policies exposed homeowners to big losses if another storm hit.

3. Individuals aren’t very good at assessing their risk.

A study on flood risk perception of more than 1,000 homeowners who all lived in flood-prone areas in New York City examined the degree to which people living in these areas assessed their likelihood of being flooded. Even allowing a 25% error margin around the experts’ estimates, most people underestimated the risk of potential damage; a large majority of the residents in this flood-prone area (63%) underestimated the average damage a flood would cause to their house. It is likely that “junk science,” including claims that climate change isn’t real, leads many citizens to problems in assessing the risks they face.
4. We need more public-private partnerships to reduce the costs of future disasters.

Many low-income families cannot afford risk-based disaster insurance and often struggle to recover from catastrophes like Katrina. One way to reduce future damages from disasters would be to assist those in hazard-prone areas with some type of means-tested voucher if they invest in loss-reduction measures, such as elevating their home or flood-proofing its foundation. The voucher would cover both a portion of their insurance premium as well as the annual payments for home-improvement loans to reduce their risk. A program such as this one would reduce future losses, lower the cost of risk-based insurance and diminish the need for the public sector to provide financial disaster relief to low-income families.

5. Even if we build stronger public-private partnerships, individuals expect government help if they suffer severe damage.

Just before this spring’s torrential Texas rains, there was a huge battle in the Texas state legislature about whether local governments ought to be allowed to engage in advance planning to mitigate the risks from big disasters. Many of the forces trying to stop that effort were among the first to demand help when floodwaters devastated the central part of the state. Even the strongest believers in small government expect help to come quickly in times of trouble. We are a generous country, and we surely don’t want that to change. But jumping in after disasters strike is far more expensive than taking steps in advance to reduce risks. Everyone agrees that the cost curve for disaster relief is going up too fast and that we need to aggressively bend it back down.

6. Hurricanes tend to grab our attention—but there are other big risks that are getting far less attention.

Hurricanes are surely important, but winter storms, floods and earthquakes are hugely damaging, too. Too often, we obsess over the last catastrophe and don’t see clearly the other big risks that threaten us. Moreover, when big disasters happen, it really doesn’t matter what caused the damage. Coast Guard Adm. Thad Allen, who led the recovery effort after Katrina, called the storm “a weapon of mass destruction without criminal intent.” The lesson is that we need to be prepared to help communities bounce back when disasters occur, whatever their cause; to help them reduce the risk of future disasters; and to be alert to those who suffer more than others.

The unrest that rocked Baltimore following Freddie Gray’s death reminds us that Adm. Allen’s lesson reaches broadly. The riots severely damaged some of the city’s poorest neighborhoods and undermined the local economy, with an impact just as serious as if the area had been flooded by a hurricane. Many of the same factors that bring in the government after natural disasters occurred here as well: a disproportionate impact on low-income residents, most of whom played no part in causing the damage; the inability to forecast when a random act, whether a storm surge or a police action, could push a community into a downward spiral; and the inability of residents to take steps before disasters happen to reduce the damage they suffer.

Conclusion

Big risks command a governmental response. Responses after disasters, whatever their cause, cost more than reducing risks in advance. Often, the poor suffer the most. These issues loom even larger in the post-Katrina years. Natural disasters have become more frequent and more costly.
We need to develop a much better strategy for making communities more resilient, especially by investing—in advance—in strategies to reduce losses. We need to pay much more attention to who bears the biggest losses when disasters strike, whatever their cause. We need to think about how to weave integrated partnerships involving both government and the private and nonprofit sectors. And we need to understand that natural disasters aren’t the only ones our communities face.

Sensible strategies will require a team effort, involving insurance companies, real estate agents, developers, banks and financial institutions, residents in hazard-prone areas as well as governments at the local, state and federal levels. Insurance premiums that reflect actual risks, coupled with strong economic incentives to reduce those risks in advance, can surely help. So, too, can stronger building codes and land use regulations that reduce the exposure to natural disasters. If we’ve learned anything in the decade since Katrina, it’s that we need to work much harder to understand the risks we face, on all fronts. We need to think about how to reduce those risks and to make sure that the least privileged among us don’t suffer the most. Thinking through these issues after the fact only ensures that we struggle more, pay more and sow the seeds for even more costly efforts in the future.

This article was first published on GovEx and was written with Donald Kettl and Ronald J. Daniels. Kettl is professor of public policy at the University of Maryland and a nonresident senior fellow at the Brookings Institution and the Volcker Alliance. Daniels is the president of Johns Hopkins University.

Howard Kunreuther

Profile picture for user HowardKunreuther

Howard Kunreuther

Howard C. Kunreuther is professor of decision sciences and business and public policy at the Wharton School, and co-director of the Wharton Risk Management and Decision Processes Center.

7 Ways Your Data Can Hurt You

Data can be your biggest asset in workers' comp, but you must reexamine all the processes you use to collect, analyze and report it.

Your data could be your most valuable asset, and participants in the workers’ compensation industry have loads available because they have been collecting and storing data for decades. Yet few analyze data to improve processes and outcomes or to take action in a timely way. Analytics (data analysis) is crucial to all businesses today to gain insights into product and service quality and business profitability, and to measure value contributed. But processes need to be examined regarding how data is collected, analyzed and reported. Begin by examining these seven ways data can hurt or help.

1. Data silos

Data silos are common in workers’ compensation. Individual data sets are used within organizations and by their vendors to document claim activity. Without interoperability (the ability of a system to work with other systems without special effort on the part of the user) or data integration, the silos naturally fragment the data, making it difficult to gain full understanding of the claim and its multiple issues. A comprehensive view of a claim includes all its associated data.

2. Unstructured data

Unstructured documentation, in the form of notes, leaves valuable information on the table. Notes sections of systems contain important information that cannot be readily integrated into the business intelligence. The cure is to incorporate data elements such as drop-down lists to describe events, facts and actions taken. Such data elements provide claim knowledge and can be monitored and measured.

3. Errors and omissions

Manual data entry is tedious work and often results in skipped data fields and erroneous content. When users are unsure of what should be entered into a data field, they might make up the input or simply skip the task. Management has a responsibility to hold data entry people accountable for what they add to the system. It matters. Errors and omissions can also occur when data is extracted by an OCR methodology. Optical character recognition is the recognition of printed or written text characters by a computer. Interpretation should be reviewed regularly for accuracy and to be sure the entire scope of content is being retrieved and added to the data set. Changing business needs may result in new data requirements.

4. Human factors

Other human factors also affect data quality. One is intimidation by IT (information technology). Usually this is not intended, but remember that people in IT are not claims adjusters or case managers. The things of interest and concern to them can be completely different, and they use different language to describe those things. People in business units often have difficulty describing to IT what they need or want. When IT says a request will be difficult or time-consuming, the best response is to persist.

5. Timeliness

There needs to be timely, appropriate reporting of critical information found in current data. The data can often reveal important facts that can be reported automatically and acted upon quickly to minimize damage. Systems should be used to continually monitor the data and report, thereby gaining workflow efficiencies. Time is of the essence.

6. Data fraud

Fraud finds its way into workers’ compensation in many ways, even into its data. The most common data fraud is found in billing—overbilling, misrepresenting diagnoses to justify procedures and duplicate billing are a few of the methods. Bill review companies endeavor to uncover these hoaxes. Another, less obvious means of fraud is through confusion.
A provider may use multiple tax IDs or NPIs (national provider identifiers) to obscure the fact that a whole set of bills is coming from the same individual or group. The system will consider the multiple identities as different and not capture the culprit. Providers can achieve the same result by using different names and addresses on bills. Analysis of provider performance is made difficult or impossible when the provider cannot be accurately identified.

7. Data as a work-in-process tool

Data can be used as a work-in-process tool for decision support, workflow analysis, quality measurement and cost assessment, among other initiatives. Timely, actionable information can be applied to work flow and to services to optimize quality performance and cost control. Accurate and efficient claims data management is critical to quality, outcome and cost management. When data accuracy and integrity are overlooked as an important management responsibility, it will hurt the organization.
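The multiple-identity problem described under "Data fraud" is, at bottom, a grouping problem: bills that carry different tax IDs or NPIs but the same underlying name and address should roll up to one provider before performance is analyzed. The sketch below shows one simple, hypothetical way to do that normalization; a production system would add fuzzy matching and address standardization.

```python
# Group bills by a normalized provider identity rather than by tax ID / NPI,
# so one provider billing under several identifiers is analyzed as one entity.
# Simplified sketch; real systems would add fuzzy name/address matching.
from collections import defaultdict

def provider_key(name, address):
    """Crude normalization: lowercase, strip punctuation and extra spaces."""
    clean = lambda s: " ".join("".join(c for c in s.lower() if c.isalnum() or c.isspace()).split())
    return (clean(name), clean(address))

def billed_by_provider(bills):
    """bills: list of dicts with name, address, tax_id and amount."""
    totals = defaultdict(lambda: {"amount": 0.0, "tax_ids": set()})
    for b in bills:
        key = provider_key(b["name"], b["address"])
        totals[key]["amount"] += b["amount"]
        totals[key]["tax_ids"].add(b["tax_id"])
    return totals

bills = [
    {"name": "Acme Rehab, LLC", "address": "12 Main St.", "tax_id": "11-111", "amount": 900.0},
    {"name": "ACME REHAB LLC",  "address": "12 Main St",  "tax_id": "22-222", "amount": 1200.0},
]
for key, info in billed_by_provider(bills).items():
    if len(info["tax_ids"]) > 1:   # same name/address billing under multiple IDs
        print(key, info)
```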

Karen Wolfe

Profile picture for user KarenWolfe

Karen Wolfe

Karen Wolfe is founder, president and CEO of MedMetrics. She has been working in software design, development, data management and analysis specifically for the workers' compensation industry for nearly 25 years. Wolfe's background in healthcare, combined with her business and technology acumen, has resulted in unique expertise.

Ransomware: Your Money or Your Data!

Ransomware, which "kidnaps" firms' or individuals' data and demands payment for its release, is becoming increasingly common.

Your client, ABC Corp., is going about its business and then gets this message:

[Screenshot of a typical ransomware lock screen, styled as a police warning.]
The above is a typical ransomware message, according to a recent Symantec Security Response report. What’s next? Pay the “ransom” and move on?

Ransomware is a type of malware, or malicious software, that is designed to block access to a computer or computer system until a sum of money is paid. After executing ransomware, cyber criminals will lock down a specific computer or an entire system and then demand a ransom to unlock the system or release the data. This type of cyber crime is becoming more and more common for two reasons:

  1. Cyber criminals are becoming increasingly organized and well-funded.
  2. A novice hacker can easily purchase ransomware on the black market.

According to the FBI, this type of cyber crime is increasingly targeting companies and government agencies, as well as individuals. The most common way that criminals execute their evil mission is by sending attachments to an individual or various personnel at a company. The busy executive opens the file, sees nothing and continues with his work day. However, once the file has been opened, the malware has been executed, and Pandora’s box has been opened! Now that the malware has been unleashed, a hacker can take over the company’s computer system or decide to steal or lock up key information.

The criminals then make a “ransom” demand on the company. The ransom is usually requested in bitcoins, a digital currency (also referred to as crypto-currency) that is not backed by any bank or government but can be used on the Internet to trade for goods or services worldwide. One bitcoin is worth about $298 at the moment. Surprisingly, the amounts are generally not exorbitant (sometimes as nominal as $500 to $5,000). The company then has the choice of paying the sum or hiring a forensics expert to attempt to unlock the system.
The best way companies can guard against such cyber crime attacks is by educating employees on the prevalence and purpose of malware and on the danger of opening suspicious attachments. Employees should be advised not to click on unfamiliar attachments and to alert IT in the event they have opened something that they suspect could have contained malware. Organizations should also consider backing up their data OFF the main network so that, if critical data is held hostage, they have a way to access most of what was kidnapped. Best practices also dictate that company systems (as well as individual personal devices) be patched and updated as soon as upgrades are available.

Finally, in the event you are the victim of a ransom attack, you will need to evaluate whether it constitutes a data breach incident. If the hijacked data is encrypted, notification is likely not necessary (as the data would be unreadable by the hacker). However, if the data was not encrypted, or you cannot prove to the authorities that it was, notification to clients or individuals is likely necessary.

Takeaway
Cyber extortion is more prevalent than most people realize because such events are not generally publicly reported. To protect against this risk, we recommend that companies employ best practices with respect to cyber security and that they consider purchasing a well-tailored cyber policy that contains cyber extortion coverage. Such coverage would provide assistance in the event a cyber extortion threat is made against the company, as well as finance the ransom amount in the event a payment is made.

Laura Zaroski

Profile picture for user LauraZaroski

Laura Zaroski

Laura Zaroski is the vice president of management and employment practices liability at Socius Insurance Services. As an attorney with expertise in employment practices liability insurance, in addition to her role as a producer, Zaroski acts as a resource with respect to Socius' employment practices liability book of business.

Einstein’s Theory on Work Comp Outcomes

The measure for workers' comp outcomes is deeply flawed, yet, defying Einstein's definition of insanity, we keep using it.

Over the last decade, I have been fortunate to grow a physical therapy company that has provided hundreds of thousands of workers’ compensation visits. I have learned a great deal about the good, the bad and the ugly in our industry during this time. One lesson is most significant: The term “outcome” is among the most misused and misapplied phrases in workers’ compensation. Very few consultants, intermediaries and “experts” actually assess the value of their provider networks, because they lack the ability to truly define an outcome.

A Lesson From Einstein

In 1905, Albert Einstein articulated the theory of relativity, including what must be the most recognizable formula in history: E = MC-squared. This theory has stood the test of time as an accurate reflection of the relationship between energy and mass. Similarly, in workers’ compensation, we have used a formula to allegedly explain the “theory of outcomes” as the relationship between cost and clinical performance. The problem is that our equation for outcomes is incomplete and inaccurate.

Ask your intermediary how it ranks healthcare providers. Inevitably, most will articulate some iteration of:

Outcomes = Price x Utilization

Or, O = PU (ironic when you say it out loud). But this is a false promise. This formula uses data derived only from claims and has no capacity to reflect the quality of the clinical performance. An effective definition of outcome incorporates price and utilization but must balance that analysis with an understanding of functional improvement. Without incorporating functional improvement into the equation, you have a useless collection of letters and numbers masquerading as a mechanism for transparency and clarity. O = PU is to the “theory of outcomes” what E = C-squared would be to the theory of relativity – partially expressed and therefore completely wrong.

The Manipulation of O = PU

Price can be manipulated. Utilization can be manipulated. Both are manipulated every day in our industry and will continue to be. However, providers can’t manipulate the functional improvement (or lack thereof) of their patients. Functional improvement either happens, or it doesn’t... and it is directly related to the ability of the provider and his attention to and effort with the patient. There are occasional anomalies in patient outcomes, but I can tell you with absolute confidence that the past decade has taught me that, when balanced for patient type and clinical environment, the best clinicians consistently achieve the greatest functional improvement with their patients. I can also tell you that far too few “experts” seem to either understand or care about functional improvement.

We live in an era now where many intermediaries ask providers to play “name that tune” when it comes to price and utilization. They guide providers to increase prices to offset discounts and to manage utilization around a number (not the patient).
It is a game, and this game does not end well. Higher prices, bigger discounts, worse care, mismanaged utilization (unrelated to patient improvement), higher indemnity, animosity, deceit, ill will, poor communication and frustration. Sound familiar?

The True Formula for Outcomes – The True Recipe for Value

It really is this simple:

Outcomes = (Price x Utilization) / Functional Improvement

If you truly want to evaluate outcomes, demand accountability and data regarding how patients improve functionally. Identify and benchmark what your injured patients could do before their injury and what they can do after their treatment. That analysis (and only that analysis) will lead you to truly identifying outcomes -- and truly understanding value. Everything else is a side show. Price and discount matter – if you know functional improvement. Utilization matters – if you know functional improvement. NONE OF IT matters without knowledge of functional improvement. Far too many “experts” have advised employers to “commoditize” the provision of healthcare through cost containment strategies that define outcomes through price and utilization only – and along the way have lost the battle for value.

The Real Question

Why do we so commonly resign ourselves to O = PU, when we know it is inaccurate? Workers’ compensation affords us the greatest opportunity for transparency in all of healthcare. Our “experts” should lead with new practices and analysis – not cut and paste outdated processes sourced from commercial health. We have the HIPAA waiver. We have greater referral control. We often have access to the essential functions of the job (benchmarks for return to work) and the injured employee’s pre-existing conditions (benchmarks for maximum medical improvement) when effectively identified through job-site assessments and post-offer testing. Last, we have a truly invested payer with comprehensive exposure to both medical and indemnity cost. An actual, accurate formula for outcomes isn’t just important, it is essential... and it is achievable.

Almost as famous as Einstein’s equation is his quote that “the definition of insanity is doing the same thing over and over again and expecting a different result.” I believe, if Einstein were alive today, he would say: “If you are happy with the outcomes you are seeing in workers’ compensation, keep doing the same thing. However, after years of increased medical costs, rising negativity between employers and their injured employees, a tidal wave of provider frustration with payer rules and bureaucracy and a national increase in indemnity, my advice is to stop repeating the same ‘cost containment’ techniques that have never worked before. If you want more from your providers and intermediaries, O = PU is the definition of insanity.”
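A small numerical sketch shows why the denominator matters. The providers and numbers below are invented; the point is only that two providers can look identical under O = PU and very different once functional improvement enters the calculation (lower is better in both versions, since price and utilization sit in the numerator).

```python
# Comparing O = P x U with O = (P x U) / Functional Improvement.
# Providers and figures are invented for illustration.

def o_pu(price_per_visit, visits):
    return price_per_visit * visits

def o_true(price_per_visit, visits, functional_improvement):
    """functional_improvement: e.g., % of pre-injury function restored (0-100)."""
    return (price_per_visit * visits) / functional_improvement

providers = {
    # name: (price per visit, visits per claim, functional improvement score)
    "Clinic A": (100.0, 12, 80),
    "Clinic B": (100.0, 12, 40),
}

for name, (price, visits, fi) in providers.items():
    print(name, "O=PU:", o_pu(price, visits), " O=PU/FI:", round(o_true(price, visits, fi), 1))
# Both clinics score 1200 under O = PU, but Clinic A (15.0) clearly
# beats Clinic B (30.0) once functional improvement is in the denominator.
```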

Matthew Condon

Profile picture for user MatthewCondon

Matthew Condon

Matt Condon has focused most of his start-up efforts in the field of healthcare services and “big data” technology.

His portfolio of startups includes ARC Physical Therapy+, Bardavon Health Innovations, RedefinePE.com and KTM2.