
To Go Big (Data), Try Starting Small

Despite big opportunities, big data has thus far been more of a big dilemma, especially for healthcare institutions. Here is a new approach.

Just about every organization in every industry is rolling in data—and that means an abundance of opportunities to use that data to transform operations, improve performance and compete more effectively. "Big data" has caught the attention of many—and perhaps nowhere more than in the healthcare industry, which has volumes of fragmented data ready to be converted into more efficient operations, bending of the "cost curve" and better clinical outcomes.

But, despite the big opportunities, for most healthcare organizations big data thus far has been more of a big dilemma: What is it? And how exactly should we "do" it? Not surprisingly, we've talked to many healthcare organizations that recognize a compelling opportunity, want to do something and have even budgeted accordingly. But they can't seem to take the first step forward.

Why is it so hard to move forward? First, most organizations lack a clear vision and direction around big data. There are several fundamental questions that healthcare firms must ask themselves, one being whether they consider data a core asset of the organization. If so, then what is the expected value of that asset, and how much will the company invest annually toward maintaining and refining that asset? Often, we see that, although the organization may believe data is one of its core assets, the company's actions and investments do not support that belief. So first and foremost, an organization must decide whether it is a "data company."

Second is the matter of getting everyone on the same page. Big data projects are complex efforts that require involvement from various parties across an organization. Data necessary for analysis resides in various systems owned and maintained by disparate operating divisions within the organization. Moreover, the data is often not in the form required to draw insight and take action.
It has to be accessed and then "cleansed"—and that requires cooperation from different people in different departments. Likely, that requires them to do something that is not part of their day jobs—without seeing any tangible benefit from contributing to the project until much later. The "what's in it for me" factor is practically nil for most such departments.

Finally, perception can also be an issue. Big data projects often are lumped in with business intelligence and data warehouse projects. Most organizations, and especially healthcare organizations, have seen at least one business intelligence and data warehouse project fail. People understand the inherent value but remain skeptical and reluctant to invest in making such a transformational initiative successful. Hence, many are reticent to commit too deeply until it's clear the organization is actually deriving tangible benefits from the data warehouse.

A more manageable approach

In our experience, healthcare organizations make more progress in tapping their data by starting with "small data"—that is, well-defined projects of a focused scope. Starting with a small scope and tackling a specific opportunity can be an effective way to generate quick results, demonstrate potential for an advanced analytics solution and win support for broader efforts down the road.

One area particularly ripe for opportunity is population health. In a perfect world with a perfect data warehouse, there are countless disease conditions to identify, stratify and intervene in to improve clinical outcomes. But it might take years to build and shape that perfect data warehouse and find the right predictive solution for each disease condition and comorbidity. A small-data project could demonstrate tangible results—and do so quickly. A small-data approach focuses on one condition—for example, behavioral health, an emerging area of concern and attention.
Using a defined set of data, it allows you to study sources of cost and derive insights from which you can design and target a specific intervention for high-risk populations. Then, by measuring the return on the intervention program, you can demonstrate the value of the small-data solution; for example, savings of several million dollars over a one-year period. That, in turn, can help build a business case for taking action, possibly on a larger scale, and for gaining the support of other internal departments.

While this approach helps build internal credibility, which addresses one of the biggest roadblocks to big data, it does have some limitations. There is a risk that initiating multiple independent small-data projects can create "siloed" efforts with little consistency and little potential for fueling the organization's ultimate journey toward using big data. Such risks can be mitigated with an intelligent and adaptive data architecture and a periodic evaluation of the portfolio of small-data solutions.

Building the "sandbox" for small-data projects

To get started, you need two things: 1) a potential opportunity to test and 2) tools and an environment that enable fast analysis and experimentation. It is important to understand quickly whether a potential solution has a promising business case, so that you can move quickly to implement it—or move on to something else without wasting further investment. If a business case exists, proceed to find a solution. Waiting to procure servers for analysis or for permission to use an existing data warehouse will cost valuable time and money. That leaves two primary alternatives for supporting data analysis: leveraging hosted platforms such as Hadoop with in-house expertise, or partnering with an organization that provides a turnkey solution for establishing analytics capabilities within a couple of days. You'll then need a "sandbox" in which to "play" with those tools.
The "sandbox" is an experimentation environment established outside of the organization's production systems and operations that facilitates analysis of an opportunity and testing of potential intervention solutions. In addition to the analysis tools, it also requires resources with the skills and availability to interpret the analysis, design solutions (e.g., a behavioral health intervention targeted to a specific group), implement the solution and measure the results.

Then building solutions

For building a small-data initiative, it is a good idea to keep a running list of potential business opportunities that may be ripe for cost reduction or other benefits. Continuing our population health example, this might include areas as simple as finding and intervening in conditions that lead to the common flu and reduced employee productivity, to preventing pre-diabetics from becoming diabetics, to behavioral health. In particular, look at areas where there is no competing intervention solution already in the marketplace and where you believe you can be a unique solution provider.

It is important to establish clear "success criteria" up front to guide quick "go" or "no-go" decisions about potential projects. These should not be specific to the particular small-data project opportunity but rather generic enough to apply across topics—as they become the principles guiding small data as a journey to broader analytics initiatives. Examples of success criteria might include:

- Cost-reduction goals
- Degree to which the initiative changes clinical outcomes
- Ease of access to data
- Ease of cleansing data so that it is in a form needed for analysis

For example, you might have easy access to data, but it requires a lot of effort to "clean" it for analysis—so it isn't actually easy to use. Another important criterion is the presence of operational know-how for turning insight into action that will create outcomes.
For example, if you don't have behavioral health specialists who can call on high-risk patients and deliver the solution (or a partner that can provide those services), then there is little point in analyzing the issue to start with. There must be a high correlation between data, insight and application.

Finally, you will need to consider the effort required to maintain a specific small-data solution over time. Take, for instance, a new predictive model to help identify high-risk behavioral health patients or high-risk pregnancies: Will that require a lot of rework each year to adjust the risk model as more data becomes available? If so, that affects the solution's ease of use. Small-data solutions need to be dynamic and able to adjust easily to market needs.

Just do it

Quick wins can accelerate progress toward realizing the benefits of big data. But realizing those quick wins requires the right focus—"small data"—and the right environment for making rapid decisions about when to move forward with a solution or when to abandon it and move on to something else. If, in a month or two, you haven't produced a solution that is translating into tangible benefits, it is time to get out and try something else. A small-data approach requires some care and good governance, but it can be a much more effective way to make progress toward the end goal of leveraging big data for enterprise advantage.

This article first appeared at Becker's Hospital Review.

Munzoor Shaikh


Munzoor Shaikh is a director in West Monroe Partners' healthcare practice, with a primary focus on managed care, health insurance, population health and wellness. Munzoor has more than 15 years of experience in management and technology consulting.

The State of Workers' Comp in 2016

Loss trends, stagnant interest rates, deteriorating reinsurance results and challenging regulatory issues are likely to have a negative impact.

Over the last two years, employers and groups that self-insure their workers’ compensation exposures have enjoyed reasonably favorable terms on their excess insurance policies. Both premiums and self-insured retentions (SIRs) have remained relatively stable since 2014. This trend is likely to continue through 2016, but the long-term outlook for this line of coverage is less promising. Changing loss trends, stagnant interest rates, deteriorating reinsurance results and challenging regulatory issues are likely to have a negative impact on excess workers’ compensation insurance in the near future.

Predictions for 2016

Little direct information is available on the excess workers’ compensation marketplace even though written premiums well exceed $1 billion nationwide. Accurately forecasting changes in the marketplace is largely a function of the prevalent conditions of the workers’ compensation, reinsurance and financial marketplaces. But, based on available information, premium rates, retentions and policy limits should remain relatively flat on excess workers’ compensation policies for the balance of the 2016 calendar year. This projected stability is because of four main factors: positive results in the workers’ compensation industry over the last two years, availability of favorable terms in the reinsurance marketplace, an increase in the interest rate by the Federal Reserve at the end of 2015 and continued investment in value-added cost-containment services by excess carriers.

For calendar year 2014, the National Council on Compensation Insurance (NCCI) reported a 98% combined ratio for the workers’ compensation industry nationwide. In 2015, the combined ratio is projected to have improved slightly to 96%. This equates to a 2% underwriting profit for 2014 and a projected 4% underwriting profit for 2015. This is the first time since 2006 that the industry has posted positive results.
The results were further bolstered by a downward trend in lost-time claims across the country and improved investment returns.

Reinsurance costs and availability play a significant role in the overall cost of excess workers’ compensation coverage. On an individual policy, reinsurance can make up 25% or more of the total cost. Excess workers’ compensation carriers, like most insurance carriers, purchase reinsurance coverage to spread risk and minimize the volatility generated by catastrophic claims and adverse loss development. Reinsurers have benefited from underwriting gains and improved investment returns over the last three years. These results have helped to stabilize their costs and terms, which has directly benefited the excess workers’ compensation carriers and, ultimately, the policyholders that purchase excess coverage.

According to NCCI, the workers’ compensation industry has posted underwriting profits in only four of the last 25 years, including the two most recent calendar years. To generate an ultimate net profit and remain viable on a long-term basis, workers’ compensation carriers rely heavily on investment income to offset the losses in most policy years. For the first time since 2006, the Federal Reserve increased target fund rates at the end of 2015. Although the increase was marginal, it has a measurable impact on the long-term investment portfolios held by workers’ compensation and excess workers’ compensation carriers.

Workers’ compensation has a very long lag between the time a claim occurs and the date it is ultimately closed. This lag time is known as a “tail.” The tail on an excess workers’ compensation policy year can be 15, 20 or even as much as 30 years. An additional 0.25% investment return on funds held in reserve over a 20-plus-year period can translate into significant additional revenue for a carrier.
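The compounding effect of that extra quarter-point is easy to illustrate. In the sketch below, the reserve amount and the baseline return are hypothetical assumptions for illustration only; the 0.25-point bump and the 20-year tail are the figures cited above.

```python
# Rough illustration (hypothetical figures): the extra revenue a carrier
# could earn from a +0.25% annual return on reserves held over a 20-year
# claim tail. Only the 0.25-point bump and 20-year horizon come from the
# article; reserves and baseline rate are assumed.

def future_value(principal: float, rate: float, years: int) -> float:
    """Compound a principal at a fixed annual rate."""
    return principal * (1 + rate) ** years

reserves = 10_000_000   # hypothetical reserves held against open claims
base_rate = 0.03        # hypothetical baseline annual investment return
bump = 0.0025           # the 0.25-point rate increase cited above
years = 20              # a typical excess workers' comp tail

base = future_value(reserves, base_rate, years)
bumped = future_value(reserves, base_rate + bump, years)
print(f"Extra investment income over {years} years: ${bumped - base:,.0f}")
```

Under these assumed figures, the quarter-point difference compounds to roughly $900,000 of additional income on $10 million of reserves, which is why even a marginal rate increase matters on a long-tail line.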
Excess workers’ compensation carriers have moved away from the traditional model of providing only commodity-based insurance coverage over the last 10 years. Most have instead developed various value-added cost-containment services that are provided within the cost of the excess policies they issue. Initially, these services were used to differentiate individual carriers from their competitors but have since evolved to have a meaningful impact on the cost of claims for both the policyholder and the carrier. These services include safety and loss control consultation to prevent claims from occurring, predictive analytics to help identify problematic claims for early intervention and benchmarking tools that help employers target specific areas for improvement. These value-added services reduce the frequency and severity of the claims experience not only for the policyholder but for excess carriers as well.

Long-Term Challenges

The results over the last two years have been relatively favorable for the workers’ compensation industry, but there are a number of long-term challenges and issues. These factors will likely lead to increasing premiums or increases in the self-insured retentions (SIRs) available under excess workers' compensation policies.

Loss Trends: Workers’ compensation claims frequency, especially lost-time frequency, has steadily declined on a national level over the last 10 years, but the average cost of lost-time claims is increasing. These two diverging trends could ultimately result in a general increase in lost-time (indemnity) costs. Further, advances in medical technology, treatments and medications (especially opioids) are pushing the medical cost component of workers’ compensation claims higher, and, on average, medical costs make up 60% to 70% of most workers’ compensation claims.
Interest Rates: While the Federal Reserve did increase interest rates by 0.25 percentage point in late December, many financial analysts say that further increases are unlikely in the foreseeable future. Ten-year T-bill rates have been steadily declining over the last 25 years, and the current 10-year Treasury rate remains at a historically low level. A lack of meaningful returns on long-term investments will necessitate future premium increases, likely coupled with increases in policy retentions to offset increasing losses in future years.

Reinsurance: According to a recent study published by Ernst & Young, the property/casualty reinsurance marketplace has enjoyed three consecutive years of positive underwriting results, but each successive year since 2013 has produced a smaller underwriting profit than the last. In 2013, reinsurers generated a 3% underwriting profit, followed by a 2% profit in 2014 and an underwriting profit of less than 1% in 2015. Like most insurance carriers, reinsurers use investment income to offset underwriting losses. As the long-term outlook for investments languishes, reinsurance carriers are likely to move their premiums and retentions upward to generate additional revenue, thus increasing the cost of underlying policies, including excess insurance.

Regulatory Matters: Workers’ compensation rules and regulations are fairly well-established in most states, but a number of recent developments at the federal and state levels may hurt workers’ compensation programs nationwide. The federal government continues to seek cost-shifting options under the Affordable Care Act (ACA) to state workers’ compensation programs. Later this year, state Medicaid programs will be permitted to recover entire liability settlements from state workers’ compensation plans – as opposed to just the amount related to the medical portion of the settlement.
At the state level, there are an increasing number of challenges to the “exclusive remedy” provision of most workers’ compensation systems. Florida’s Supreme Court is currently deliberating such a challenge. Should the court rule in favor of the plaintiffs, Florida employers could be exposed to increased litigation from injured workers. A ruling against exclusive remedy could possibly set precedent for plaintiff attorneys to bring similar litigation in other states. Lastly, allowing injured workers to seek remedies outside of the workers’ compensation system would strip carriers and employers of many cost-containment options.

Vince Capaldi


Vince Capaldi is the president of the Bay Oaks Wholesale Brokerage, a national wholesale insurance broker specializing in self-insured workers’ compensation programs. Capaldi has developed and maintained numerous individual and group self-insurance plans in both the public and private sectors nationwide.

$60 Billion Elephant in the Room

More than half of car accidents may now stem from phone-related distracted driving, according to a survey of agents -- a huge increase.

Research has found that one in four car crashes is caused by phone-related distracted driving. However, a recent LifeSaver study of agents suggests that this figure is a vast understatement: More than 60% of agents responded that half or more of all claims are now related to distracted driving.

It’s downright scary to think about the injuries, property damage and loss of life that result from distracted driving. If our survey bears out on a national scale, the full cost could be north of $60 billion a year. And, of course, this cost is passed on to drivers in the form of increased premiums. In fact, we’re already seeing some major insurers (GEICO, Allstate and Zurich) publicly conceding that they are feeling the pain from this fast-growing epidemic.

Assuming the annual cost to insurance companies ranges from $30 billion (if one in four accidents stems from phone-related distracted driving) to $60 billion (using the numbers from our research), a mere 10% reduction in distracted driving accidents would save insurance carriers and their customers several billion dollars annually, in addition to saving lives and drastically reducing injuries.

The infographic below highlights the cost of distracted driving to the insurance industry. It also offers some insight into the minds of insurance agents receiving these claims, as well as the habits of today’s distracted drivers. Take a look and let us know your thoughts in the comments below.
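The $30 billion-$60 billion range and the "several billion dollars" savings figure follow from simple proportional arithmetic. The sketch below uses only the assumed figures stated in the article (a $30 billion cost at a 25% accident share), not actuarial data.

```python
# Back-of-envelope sketch of the article's figures (assumptions, not
# actuarial data): if a 25% share of accidents implies ~$30B of annual
# insured cost, the implied total is $120B, and cost scales linearly
# with the share attributed to phone-related distraction.

TOTAL_ANNUAL_ACCIDENT_COST = 120e9  # implied total, since 25% -> $30B

def distraction_cost(share_of_accidents: float) -> float:
    """Annual cost attributed to phone-related distracted driving."""
    return TOTAL_ANNUAL_ACCIDENT_COST * share_of_accidents

low = distraction_cost(0.25)   # research estimate: 1 in 4 crashes
high = distraction_cost(0.50)  # agent survey estimate: half of claims

# Savings from a 10% reduction in distracted-driving accidents:
savings_low, savings_high = 0.10 * low, 0.10 * high
print(f"${low / 1e9:.0f}B-${high / 1e9:.0f}B annual cost")
print(f"${savings_low / 1e9:.0f}B-${savings_high / 1e9:.0f}B saved by a 10% reduction")
```

The 10% reduction works out to $3 billion-$6 billion a year, which is the "several billion dollars" the article cites.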

Ted Chen


Ted Chen is recognized as a leader in building strategic partnerships for technology companies. Chen’s latest venture – LifeSaver – seeks to curb the human cost of distracted driving, while giving insurance companies powerful tools to encourage responsible driving.

New Approach to Risk and Infrastructure?

A P3 model (Public Private Partnership) can let governments invest in infrastructure while transferring risk in new ways.

Globally, the World Economic Forum estimates that the planet is under-investing in infrastructure by as much as $1 trillion a year. Since 1990, for example, the global road network has expanded by 88%, but demand has increased by 218%. With the global population continuing to grow – and urban populations, in particular – the pressure on existing infrastructure is only set to worsen.

And in the developed world, that infrastructure is creaking: In the U.K., 11 coal-fired power stations are nearing 50 years old, the end of their operational lives, and replacements have yet to be built; in the U.S., the average age of the country’s 84,000 dams is 52 years; in Germany, a third of all rail bridges are more than 100 years old; parts of London’s Underground rail system, still in daily use by hundreds of thousands of commuters, run through tunnels that are more than 150 years old.

According to the Report Card on America’s Infrastructure by the American Society of Civil Engineers (ASCE), the U.S. alone will need $3.6 trillion of infrastructure investment by 2020. The report assigned near-failing grades to inland waterways and levees, and poor marks for the state of drinking water, dams, schools, roads and hazardous waste infrastructure. Europe’s infrastructure is in worse shape. The Royal Institute of International Affairs has suggested that the continent needs $16 trillion of infrastructure investment by 2030, more than any other region in the world.

Taxing Issues, Tragic Consequences

While taxes once covered the cost of building and maintaining public infrastructure, entitlement programs such as Social Security and healthcare have started to claim a larger share of these funds as a percentage of government tax revenue, particularly as the number of people in retirement has expanded.
In addition, as the cost of social programs grew, governments came under pressure to cut taxes, leaving even less money available to maintain existing infrastructure, let alone invest in the requirements of growing populations. “Too often infrastructure is seen only through the lens of cost, expenditure and not as core to society’s prosperity,” says Geoffrey Heekin, executive vice president and managing director, global construction and infrastructure, Aon Risk Solutions. “Since the 1950s, investment in infrastructure in developed countries has been declining,” he says. “In the U.S., for example, investment as a percentage of GDP has fallen from around 5% to 6% in the 1950s to around 2% today.”

Tragically, train derailments, road closures, water main breaks and even bridge collapses have become commonplace. “Until situations like the water crisis in Flint or a bridge collapse happens, infrastructure does not hold proper weighting in the psyche of leaders in government,” Heekin says. This lack of attention to infrastructure is costing developed economies billions of dollars in lost productivity, jobs and competitiveness. Without addressing the infrastructure investment gap, the U.S. economy alone could lose $3.1 trillion in GDP by 2020, according to the ASCE, while one estimate attributes 14,000 U.S. highway deaths a year to poorly maintained road infrastructure.

A Private Sector Solution to Public Sector Under-Investment?

To begin reversing the infrastructure gap, it is likely that governments will need to find ways to encourage private sector investment toward replacing, renewing and upgrading physical infrastructure. Governments of all political stripes are increasingly supportive of private investment in infrastructure. One model that is now gaining attention is the Public Private Partnership (P3) model. P3s in one form or another have been used successfully in developed countries for several decades.
They are being used to procure everything from public healthcare facilities, schools and courthouses to highways, port facilities and energy infrastructure. While the volume and type of P3 deals can vary widely by country, there continues to be an upward trend in the model’s use by the public sector. In 2015, for example, Canada procured 36% of its infrastructure with the P3 model. Aon Infrastructure Solutions anticipates that 21 P3 projects will close in Canada in 2016, with a total capital value of US$12.8 billion – the highest value of P3 projects in Canadian history. In the U.S., where adoption of the P3 model is less widespread, 11 projects are expected to close in 2016, with a capital value of US$8.7 billion.

Like traditional design-bid-build procurement, P3 projects involve public authorities putting public projects or programs up for competitive tender and selecting a preferred bidder from multiple consortia. The key difference is that the contractual structure in a P3 allows the public authority to transfer a different set of risks to the private party – including (but not always) the financing for the project. The arrangement can allow the private partner that designs, builds and finances construction of the asset to operate and maintain it in return for either a share of the revenue generated by the use of the asset or a stream of constant payments from the public authority (also called availability payments).

Keeping Focused on the Big Picture

“The public sector benefits from P3 delivery when the model is applied to a project that meets a community need and is procured through a transparent, accountable process,” says Gordon Paul, senior vice president, Aon Risk Solutions and member of Aon Canada’s Construction Services Group executive committee and Aon’s global PPP Centre of Excellence. “Public authorities seek ‘value for money’ in a P3 project by looking to the long-term value,” Paul says.
This means identifying whether the private sector party is able to design, build, finance, operate and maintain an infrastructure project for a price lower than if the public authority did it on its own over the same period. It’s about the full lifecycle of the project – not just the building costs.

Taking a big-picture view is equally important for the private sector party, says Alister Burley, head of construction for Aon Risk Services Australia. He points to the importance of taking a holistic view of P3 projects and investments to enable efficiencies to be built in that will carry forward.

If done right, P3 arrangements can be a significant benefit to both the public and private sectors. Public bodies gain a much-needed boost to their infrastructure, often with long-term maintenance included in the deal, reducing the potential negative economic and health consequences of infrastructure failure. And private investors can secure a stable, long-term return through a stake in some of the underlying essentials of our economies. Whatever route governments take to secure the integrity of our underlying infrastructure, one thing is clear – without a significant increase in infrastructure investment over the coming years, the world’s economy and health could well be put at further risk.

Tariq Taherbhai


Tariq Taherbhai is senior director at Aon Infrastructure Solutions, Aon’s global risk advisory group for alternative project delivery (APD)/public-private partnerships (PPP).

Where Will IoT Have Biggest Impact?

More than 300 insurers surveyed say the IoT's biggest impact will be on "behavior steering" among customers.

See the full infographic here.

Marsha Irving


Marsha Irving is the head of financial services for FC Business Intelligence. She advises the business on new opportunities and future direction through trend-spotting, in-depth market research and written analysis. Irving works across a variety of industry verticals developing new products, all the way from conception to launch.

Smart Homes Are Still Way Too Stupid

I'm cranky on the subject of the smart home because I've been hearing variations on this theme for 25 years without seeing a result.


It's nice to know sharp people -- in this case, Rich Jaroslovsky, a former colleague at the Wall Street Journal who is now a vice president at SmartNews. He just wrote a takedown of the smart home that saved me the trouble.

I had visited the topic in a general way a year ago in an article taking issue with something Google's executive chairman, Eric Schmidt, had said about how the Internet will disappear. My basic complaint about how even really smart people think about automation is that automation is often more trouble than it's worth and that people blithely assume I'd like to automate decisions that, in fact, I don't want automated -- no, I don't want my refrigerator ordering milk for me, my lights always flipping on a certain way when I walk through the door or my TV always turning to ESPN when I wake up.

Recent stories about the glories of the smart home made me think I needed to return to the subject, more specifically this time -- I'm cranky on the subject of the smart home because I've been hearing variations on this theme for 25 years without seeing a result; no, Nest doesn't count. I was prompted into action when I received the following in an email this morning:

"Many large U.S. insurers are bracing for the impact of autonomous driving on their business, but they have yet to grasp that the same trend is at play in the homeowners and renters insurance markets. Insurers that don’t develop a value proposition around the connected home will be forced to give steeper discounts to reflect the lower risks without generating any strategic benefits. Savvy insurers that adapt to the new dynamic have a historic opportunity to become far more relevant than they are today.

"Based on over 100... discussions conducted between November 2015 and February 2016 with smart-home technology vendors; P&C, health, and life insurers; venture capital firms; and technology vendors, this report examines the connected-home use case for the insurance industry, profiles two turnkey smart-home... and mentions 147 other firms." [I deleted three corporate names in there, including the author of the report, because I don't see any need to make this personal, even though you're expected to pay real money for that report.]

Just when I was gearing up to write something on the smart home, though, I saw that Rich had posted his column, which begins:

"With every new smart device I add to my home, it gets a little dumber.

"The thermostats don’t talk to the lights. The security cameras don’t talk to the alarm system, which doesn’t talk to the garage door. The networked speakers talk to each other—but not to the TV sitting a few feet away. Just about every device has its own app for my smartphone, but since none of them work with each other, I’ve got 15 apps controlling 15 functions."

I encourage you to read the whole piece, especially if you harbor hopes that the smart home is a looming opportunity. As Rich notes, you can't have a connected home if the devices don't talk to each other. And while I may have a "standard" for communication, if Rich has a separate standard and so do 87 others of you, then we don't, in fact, have a standard way of communicating.

We'll get to the smart home.

But not soon.


Paul Carroll


Paul Carroll is the editor-in-chief of Insurance Thought Leadership.

He is also co-author of A Brief History of a Perfect Future: Inventing the Future We Can Proudly Leave Our Kids by 2050 and Billion Dollar Lessons: What You Can Learn From the Most Inexcusable Business Failures of the Last 25 Years and the author of a best-seller on IBM, published in 1993.

Carroll spent 17 years at the Wall Street Journal as an editor and reporter; he was nominated twice for the Pulitzer Prize. He later was a finalist for a National Magazine Award.

When a Penalty Is Not a Penalty

The ACA creates a penalty for not purchasing health insurance -- but do the math. It's not really a penalty.

The Affordable Care Act requires most Americans to buy qualifying health insurance coverage. Fail to comply with this mandate, and there’s a financial penalty waiting for you come tax time. But when is a penalty not a penalty? When is a mandate not a mandate? Hey, kids, let’s do some math.

The penalty for going uninsured in 2016 is $695 per adult and $347.50 per child, up to a maximum of $2,085, or 2.5% of household income, whichever is greater. To determine the cost of coverage, we’ll use the second-lowest-cost Silver plan available in a state. That’s the benchmark used to calculate ACA subsidies, and in 2015 Silver plans were roughly 68% of policies sold through an exchange. Even more important, I found a table showing the cost of the second-lowest-cost Silver plan for 40-year-olds by state, but I couldn’t find a similar table for other levels.

The least our 40-year-old could spend on the second-lowest-cost Silver plan this year is $2,196, in New Mexico; the highest premium is $8,628, in Alaska. The median is $3,336. Divide the penalty by the premium, and you get 32% of the cheapest premium and 21% of the median premium. Put another way, paying the penalty saves our 40-year-old consumer $1,500 in New Mexico and more than $2,600 in the mythical state of median. I did find a table showing the national average premium a 21-year-old would pay for a Bronze plan: $2,411. In this situation, the $695 penalty amounts to just 29% of the policy’s cost, a savings of more than $1,700.

The purpose of this post is not to encourage people to go uninsured. I think that’s financially stupid given the cost of needing health insurance coverage and not having it. And, personally, I support the individual mandate. I also understand the political obstacles to establishing a real penalty for remaining uninsured. However, I also believe the individual market in this country is in trouble. (More on this in a later post.) Adverse selection is a contributing cause of this danger.

The individual mandate is supposed to mitigate adverse selection. The enforcement mechanism for that mandate, however, is a penalty that, for many people, is no penalty at all. That’s not just my opinion. That’s the math.

A version of this article was originally posted on LinkedIn.
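The arithmetic above is easy to check for yourself. Here is a simplified sketch of the 2016 penalty as described in this article (the greater of a capped flat amount or 2.5% of household income); the function names are mine, and the real calculation also involves filing thresholds and a premium-based cap on the percentage penalty that are omitted here.

```python
def penalty_2016(adults, children, household_income):
    """Simplified 2016 ACA individual-mandate penalty: the greater of
    the flat amount (capped at $2,085) or 2.5% of household income."""
    flat = min(695 * adults + 347.50 * children, 2085)
    return max(flat, 0.025 * household_income)

def savings_by_paying_penalty(premium, penalty):
    """What an uninsured person keeps by paying the penalty instead."""
    return premium - penalty

# A single 40-year-old with modest income owes the $695 flat amount.
p = penalty_2016(adults=1, children=0, household_income=25000)
print(p)                                   # 695.0
print(savings_by_paying_penalty(3336, p))  # 2641.0 -- the "median state" case
```

Run against the article's figures, the savings in the median state come out to $2,641, matching the "more than $2,600" claim above.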

Alan Katz


Alan Katz speaks and writes nationally on healthcare reform, technology, sales and business planning. He is author of the award-winning Alan Katz Blog and of Trailblazed: Proven Paths to Sales Success.

In Third Parties We (Mis)trust?

Mutual distributed ledgers, enabled by blockchain, are changing the nature of trust in financial services, with profound implications.

Technology is transforming trust. It has never been easier to start a relationship across great geographic distance. With a credible website and reasonable products or services, companies half a world away can persuade people to learn about them and enter into commerce with them. Society is changing radically when people find themselves trusting people with whom they’ve had no prior experience, e.g. on eBay or Facebook, more than banks they’ve dealt with their whole lives. Mutual distributed ledgers pose a threat to the trust relationship in financial services.

The History of Trust

Trust leverages a history of relationships to extend credit and the benefit of the doubt to someone. Trust is about much more than money; it’s about human relationships, obligations and experiences and about anticipating what other people will do. In risky environments, trust enables cooperation and permits voluntary participation in mutually beneficial transactions that are otherwise costly to enforce or cannot be enforced by third parties. By taking a risk on trust, we increase the amount of cooperation throughout society while simultaneously reducing the costs, unless we are wronged.

Trust is not a simple concept, nor is it necessarily an unmitigated good, but trust is the stock-in-trade of financial services. In reality, financial services trade on mistrust. If people trusted each other on transactions, many financial services might be redundant. People use trusted third parties in many roles in finance: for settlement, as custodians, as payment providers, as poolers of risk. Trusted third parties perform three roles:
  • validate – confirming the existence of something to be traded and membership of the trading community;
  • safeguard – preventing duplicate transactions, i.e. someone selling the same thing twice or "double-spending";
  • preserve – holding the history of transactions to help analysis and oversight, and in the event of disputes.
A ledger is a book, file or other record of financial transactions. People have used various technologies for ledgers over the centuries. The Sumerians used clay cuneiform tablets. Medieval folk split tally sticks. In the modern era, the implementation of choice for a ledger is a central database, found in all modern accounting systems. In many situations, each business keeps its own central database of all its own transactions, and these systems are reconciled, often manually and at great expense if something goes wrong. But where many parties interact and need to keep track of complex sets of transactions, they have traditionally found that creating a centralized ledger is helpful. A centralized transaction ledger needs a trusted third party who makes the entries (validates), prevents double counting or double spending (safeguards) and holds the transaction histories (preserves). Over the ages, centralized ledgers have appeared in registries (land, shipping, tax), exchanges (stocks, bonds) and libraries (index and borrowing records), to give a few examples.

The latest technological approach to all of this is the distributed ledger (aka blockchain, aka distributed consensus ledger, aka the mutual distributed ledger, or MDL, the term we’ll stick to here). To understand the concept, it helps to look back over the story of its development.

1960s/'70s: Databases

The current database paradigm began around 1970 with the invention of the relational model and the widespread adoption of magnetic tape for record-keeping. Society runs on these tools to this day, even though some important things are hard to represent using them. Trusted third parties work well on databases, but correctly recording remote transactions can be problematic. One approach to remote transactions is to connect machines and work out the lumps as you go. But when data leaves one database and crosses an organizational boundary, problems start.
For Organization A, the contents of Database A are operational reality, true until proven otherwise. But for Organization B, the message from A is a statement of opinion. Orders sit as “maybe” until payment is made and clears past the last possible chargeback: This tentative quality is always attached to data from the outside.

1980s/'90s: Networks

Ubiquitous computer networking came of age two decades after the database revolution, starting with protocols like email and hitting its full flowering with the invention of the World Wide Web in the early 1990s. The network continues to get smarter, faster and cheaper, as well as more ubiquitous, and it is starting to show up in devices like our lightbulbs under names like the Internet of Things. While machines can now talk to each other, the systems that help us run our lives do not yet connect in joined-up ways. Although in theory information could just flow from one database to another with your permission, in practice the technical costs of connecting databases are huge. Worse, we go back to paper and metaphors from the age of paper because we cannot get the connection software right. All too often, the computer is simply a way to fill out forms: a high-tech paper simulator. It is nearly impossible to get two large entities to share our information between them on our behalf.

Of course, there are attempts to clarify this mess: to introduce standards and code reusability to help streamline business interoperability. You can choose from EDI, XMI-EDI, JSON, SOAP, XML-RPC, JSON-RPC, WSDL and half a dozen more standards to “assist” your integration processes. The reason there are so many standards is that none of them finally solved the problem. Take the problem of scaling collaboration.
Say that two of us have paid the up-front costs of collaboration and have achieved seamless technical harmony, and now a third partner joins our union, then a fourth and a fifth. By five partners, we have 10 connections to debug; by 10 partners, the number is 45. The cost of collaboration keeps going up for each new partner as they join our network, and the result is small pools of collaboration that just will not grow. This isn’t an abstract problem: This is banking, finance, medicine, electrical grids, food supplies and the government.

A common approach to this quadratic quandary is to put somebody in charge, a hub-and-spoke solution. We pick an organization (Visa would be typical) and all agree that we will connect to Visa using its standard interface. Each organization has to get just a single connector right. Visa takes 1% off the top, making sure that everything clears properly. But while a third party may be trusted, that doesn’t mean it is trustworthy. There are a few problems with this approach, but they can be summarized as "natural monopolies." Being a hub for others is a license to print money for anybody that achieves incumbent status. Visa gets 1% or more of a very sizeable fraction of the world’s transactions with this game; Swift likewise. If you ever wonder what the economic upside of the MDL business might be, just think about how big that number is across all forms of trusted third parties.

2000s/'10s: Mutual Distributed Ledgers

MDL technology securely stores transaction records in multiple locations with no central ownership. MDLs allow groups of people to validate, record and track transactions across a network of decentralized computer systems with varying degrees of control of the ledger. Everyone shares the ledger. The ledger itself is a distributed data structure held in part or in its entirety by each participating computer system. The computer systems follow a common protocol to add transactions.
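Backing up a step, the scaling arithmetic behind the "quadratic quandary" (45 connections to debug among 10 partners) is simply the pairwise-connection count, n(n-1)/2. A quick sketch:

```python
def connections(partners: int) -> int:
    """Point-to-point links among n fully connected partners: n choose 2."""
    return partners * (partners - 1) // 2

for n in (2, 5, 10, 20):
    print(n, connections(n))  # 2->1, 5->10, 10->45, 20->190
```

A hub-and-spoke arrangement, by contrast, needs only n connectors, one per partner, which is exactly why the hub position is so lucrative.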
The protocol is distributed using peer-to-peer application architecture. MDLs are not technically new; concurrent and distributed databases have been a research area since at least the 1970s, and Z/Yen built its first one in 1995. Historically, distributed ledgers have suffered from two perceived disadvantages: insecurity and complexity. These two perceptions are changing rapidly because of the growing use of blockchain technology, the MDL of choice for cryptocurrencies. Cryptocurrencies need to:
  • validate – have a trust model for time-stamping transactions by members of the community;
  • safeguard – have a set of rules for sharing data of guaranteed accuracy;
  • preserve – have a common history of transactions.
If faith in the technology’s integrity continues to grow, then MDLs might substitute for two roles of a trusted third party: preventing duplicate transactions and providing a verifiable public record of all transactions. Trust moves from the third party to the technology. Emerging techniques, such as smart contracts and decentralized autonomous organizations, might in the future also permit MDLs to act as automated agents.

A cryptocurrency like bitcoin is an MDL with "mining on top." The mining substitutes for trust: "Proof of work" is simply proof that you have a warehouse of expensive computers working, and the proof is the output of their calculations. Cryptocurrency blockchains do not require a central authority or trusted third party to coordinate interactions, validate transactions or oversee behavior. However, when the virtual currency is going to be exchanged for real-world assets, we come back to needing trusted third parties to trade ships or houses or automobiles for virtual currency. A big consequence may be that the first role of a trusted third party, validating an asset and identifying community members, becomes the most important. This is why MDLs may challenge the structure of financial services, even though financial services are here to stay.

Boring Ledgers Meet Smart Contracts

MDLs and blockchain architecture are essentially protocols that can work as well as hub-and-spoke for getting things done, but without the liability of a trusted third party in the center that might choose to exploit its natural monopoly. Even with smaller trusted third parties, MDLs have some magic properties: the same agreed data on all nodes, "distributed consensus," rather than data passed around through messages. In the future, smart contracts can store promises to pay and promises to deliver without having a middleman or exposing people to the risk of fraud.
The same logic that secured "currency" in bitcoin can be used to secure little pieces of detached business logic. Smart contracts may automatically move funds in accordance with instructions given long ago, like a will or a futures contract. For pure digital assets, there is no counterparty risk, because the value to be transferred can be locked into the contract when it is created and released automatically when the conditions and terms are met: If the contract is clear, then fraud is impossible, because the program actually has real control of the assets involved, rather than requiring trustworthy middlemen like ATMs or car rental agents. Of course, such structures challenge some of our current thinking on liquidity.

Long Finance has a Zen-style koan: “If you have trust, I shall give you trust; if you have no trust, I shall take it away.” Cryptocurrencies and MDLs are gaining more and more trust. Trust in contractual relationships mediated by machines sounds like science fiction, but the financial sector has profitably adapted to the ATM, Visa, Swift, Big Bang, HFT and many other innovations. New ledger technology will enable new kinds of businesses, as reducing the cost of trust and fixing problems allows new kinds of enterprises to be profitable. The speed of adoption of new technology sorts winners from losers.

Make no mistake: The core generation of value has not changed; banks are trusted third parties. The implication, though, is that much more will be spent on identity, such as Anti-Money-Laundering/Know-Your-Customer checks backed by indemnity, and on asset validation, than on transaction fees. A U.S. political T-shirt about terrorists and religion inspires a closing thought: “It’s not that all cheats are trusted third parties; it’s that all trusted third parties are tempted to cheat.” MDLs move some of that trust into technology. And as costs and barriers to trusted third parties fall, expect demand and supply to increase.
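The "mining on top" idea mentioned above is worth a concrete illustration. Below is a minimal proof-of-work sketch, a toy rather than bitcoin's actual protocol: finding a valid nonce is deliberately expensive, but anyone can verify it with a single hash, which is what lets trust shift from a third party to the technology itself.

```python
import hashlib

def mine(record: str, difficulty: int = 4) -> int:
    """Search for a nonce whose SHA-256 digest of record+nonce starts
    with `difficulty` hex zeros -- costly to find by brute force."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{record}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(record: str, nonce: int, difficulty: int = 4) -> bool:
    """Cheap to check: one hash confirms the work was done."""
    digest = hashlib.sha256(f"{record}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine("pay Alice 5 units")
print(verify("pay Alice 5 units", nonce))  # True
```

Raising `difficulty` by one hex digit multiplies the expected search effort by 16 while leaving verification a single hash, which is the asymmetry that makes the "warehouse of expensive computers" a credible substitute for institutional trust.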

Michael Mainelli


Michael Mainelli co-founded Z/Yen, the City of London’s leading commercial think tank and venture firm, in 1994 to promote societal advance through better finance and technology. Today, Z/Yen boasts a core team of 25 highly respected professionals and is well capitalized because of successful spin-outs and ventures.

Why Healthcare Costs Soar (Part 5)

Hospital mergers and acquisitions of physician practices keep driving up costs. It's high time we changed the equation.

Readers of Cracking Health Costs know that healthcare is both complex and consuming an ever-greater share of GDP in the U.S., while our health outcomes fall behind those of our peer countries. According to the 2015 Health Care Services Acquisition Report, deal volume for businesses in the healthcare services sector rose 18%, with 752 transactions in 2014, for a total of $62 billion; acquisitions of physician practices accounted for $3.2 billion of the total. As healthcare suppliers continue to consolidate, what does this mean for the employers who pay for these services?

With the attention around value-based contracts and accountable care organizations (ACOs), we should expect the number of ACO contracts to continue to expand beyond the 750 in existence today, and the value-based concept sounds good. But Dr. Eric Bricker’s blog pointed out that 41% of all physicians did not know whether they participated in an ACO, as referenced in the Feb. 10, 2016, issue of Medical Economics magazine.

Is there real motivation to change? Hospital mergers lead to average price increases of more than 20% for care, while physician prices increase nearly 14% post-acquisition. The result: Value-based contracts will be based on higher fees for the combined entities. In Part 3 of this series, the provider we mentioned built a strong reputation, which let it charge higher per-unit fees. But when that provider enters into value-based contracts, renewals will depend on its ability to hit cost targets agreed on with the insurance companies. While the per-unit price in those contracts will be important, the Seattle provider’s biggest opportunity is to establish a more consistent process of care among its physicians, so employers stop paying for the wide variation in treatment and for unnecessary care.

Here’s what we know: 1) There has been value-based contracting; 2) there has been data to assess performance; and 3) yet there remains extremely wide variation in care among providers, especially for patients with complex health problems. Where such variation exists in healthcare, many people are getting substandard care.

So why is there still variation? Well, if you sold a consumer product, like a flat-screen TV, that had wide variation in results yet commanded a premium price and saw sales stay strong, how motivated would you be to change your process? With TVs, there is ample competition. Consumers will purchase another TV brand if one is overpriced or of poor quality. But, in self-insured benefit plans, most employers have not had the appetite to take the tough but necessary steps toward disintermediation, despite the huge differences in price and quality. It’s high time for employers to replicate how purchasers in other industries have collaborated with their suppliers to address variations in process and quality and to eliminate cost inefficiencies.

Tom Emerick


Tom Emerick is president of Emerick Consulting and co-founder of EdisonHealth and Thera Advisors. Emerick’s years with Wal-Mart Stores, Burger King, British Petroleum and American Fidelity Assurance have provided him with an excellent blend of experience and contacts.

Smart Homes Are Still Way Too Stupid

Many claim that the smart home represents a major shift, and opportunity, for insurers, but we're still way too early.

It's nice to know sharp people -- in this case, Rich Jaroslovsky, a former colleague at the Wall Street Journal who is now a vice president at SmartNews. He just wrote a takedown of the smart home that saved me the trouble.

I had visited the topic in a general way a year ago, in an article taking issue with something Google's executive chairman, Eric Schmidt, had said about how the Internet will disappear. My basic complaint about how even really smart people think about automation is that automation is often more trouble than it's worth and that people blithely assume I'd like to automate decisions that, in fact, I don't want automated -- no, I don't want my refrigerator ordering milk for me, my lights to always flip on a certain way when I walk through the door or my TV to always turn to ESPN when I wake up.

Recent stories about the glories of the smart home made me think I needed to return to the subject, more specifically this time. I'm cranky on the subject of the smart home because I've been hearing variations on this theme for 25 years without seeing a result; no, Nest doesn't count. I was prompted into action when I received the following in an email this morning:

"Many large U.S. insurers are bracing for the impact of autonomous driving on their business, but they have yet to grasp that the same trend is at play in the homeowners and renters insurance markets. Insurers that don’t develop a value proposition around the connected home will be forced to give steeper discounts to reflect the lower risks without generating any strategic benefits. Savvy insurers that adapt to the new dynamic have a historic opportunity to become far more relevant than they are today.

"Based on over 100... discussions conducted between November 2015 and February 2016 with smart-home technology vendors; P&C, health, and life insurers; venture capital firms; and technology vendors, this report examines the connected-home use case for the insurance industry, profiles two turnkey smart-home... and mentions 147 other firms." [I deleted three corporate names in there, including the author of the report, because I don't see any need to make this personal, even though you're expected to pay real money for that report.]

Just when I was gearing up to write something on the smart home, though, I saw that Rich had posted his column, which begins:

"With every new smart device I add to my home, it gets a little dumber.

"The thermostats don’t talk to the lights. The security cameras don’t talk to the alarm system, which doesn’t talk to the garage door. The networked speakers talk to each other—but not to the TV sitting a few feet away. Just about every device has its own app for my smartphone, but since none of them work with each other, I’ve got 15 apps controlling 15 functions."

I encourage you to read the whole piece, especially if you harbor hopes that the smart home is a looming opportunity. As Rich notes, you can't have a connected home if the devices don't talk to each other. And while I may have a "standard" for communication, if Rich has a separate standard and so do 87 others of you, then we don't, in fact, have a standard way of communicating.

We'll get to the smart home.

But not soon.

Paul Carroll


Paul Carroll is the editor-in-chief of Insurance Thought Leadership.

He is also co-author of A Brief History of a Perfect Future: Inventing the Future We Can Proudly Leave Our Kids by 2050 and Billion Dollar Lessons: What You Can Learn From the Most Inexcusable Business Failures of the Last 25 Years and the author of a best-seller on IBM, published in 1993.

Carroll spent 17 years at the Wall Street Journal as an editor and reporter; he was nominated twice for the Pulitzer Prize. He later was a finalist for a National Magazine Award.