
Smart Homes Are Still Way Too Stupid

Many claim that the smart home represents a major shift, and opportunity, for insurers, but we're still way too early.


It's nice to know sharp people -- in this case, Rich Jaroslovsky, a former colleague at the Wall Street Journal who is now a vice president at SmartNews. He just wrote a takedown of the smart home that saved me the trouble.

I had visited the topic in a general way a year ago, in an article taking issue with something Google's executive chairman, Eric Schmidt, had said about how the Internet will disappear. My basic complaint about how even really smart people think about automation is that it is often more trouble than it's worth and that people blithely assume I'd like to automate decisions that, in fact, I don't want automated -- no, I don't want my refrigerator ordering milk for me, my lights to always flip on a certain way when I walk through the door or my TV to always turn to ESPN when I wake up.

Recent stories about the glories of the smart home made me think I needed to return to the subject, more specifically this time -- I'm cranky on the subject of the smart home because I've been hearing variations on this theme for 25 years without seeing a result; no, Nest doesn't count. I was prompted into action when I received the following in an email this morning:

"Many large U.S. insurers are bracing for the impact of autonomous driving on their business, but they have yet to grasp that the same trend is at play in the homeowners and renters insurance markets. Insurers that don’t develop a value proposition around the connected home will be forced to give steeper discounts to reflect the lower risks without generating any strategic benefits. Savvy insurers that adapt to the new dynamic have a historic opportunity to become far more relevant than they are today.

"Based on over 100... discussions conducted between November 2015 and February 2016 with smart-home technology vendors; P&C, health, and life insurers; venture capital firms; and technology vendors, this report examines the connected-home use case for the insurance industry, profiles two turnkey smart-home... and mentions 147 other firms." [I deleted three corporate names in there, including the author of the report, because I don't see any need to make this personal, even though you're expected to pay real money for that report.]

Just when I was gearing up to write something on the smart home, though, I saw that Rich had posted his column, which begins:

"With every new smart device I add to my home, it gets a little dumber.

"The thermostats don’t talk to the lights. The security cameras don’t talk to the alarm system, which doesn’t talk to the garage door. The networked speakers talk to each other—but not to the TV sitting a few feet away. Just about every device has its own app for my smartphone, but since none of them work with each other, I’ve got 15 apps controlling 15 functions."

I encourage you to read the whole piece, especially if you harbor hopes that the smart home is a looming opportunity. As Rich notes, you can't have a connected home if the devices don't talk to each other. And while I may have a "standard" for communication, if Rich has a separate standard and so do 87 others of you, then we don't, in fact, have a standard way of communicating.

We'll get to the smart home.

But not soon.


Paul Carroll

Paul Carroll is the editor-in-chief of Insurance Thought Leadership.

He is also co-author of A Brief History of a Perfect Future: Inventing the Future We Can Proudly Leave Our Kids by 2050 and Billion Dollar Lessons: What You Can Learn From the Most Inexcusable Business Failures of the Last 25 Years and the author of a best-seller on IBM, published in 1993.

Carroll spent 17 years at the Wall Street Journal as an editor and reporter; he was nominated twice for the Pulitzer Prize. He later was a finalist for a National Magazine Award.

When a Penalty Is Not a Penalty

The ACA creates a penalty for not purchasing health insurance -- but do the math. It's not really a penalty.

The Affordable Care Act requires most Americans to buy qualifying health insurance coverage. Fail to comply with this mandate, and there’s a financial penalty waiting for you come tax time. But when is a penalty not a penalty? When is a mandate not a mandate? Hey, kids, let’s do some math.

The penalty for going uninsured in 2016 is $695 per adult and $347.50 per child, up to a maximum of $2,085 or 2.5% of household income, whichever is greater. To determine the cost of coverage, we’ll use the second-lowest-cost Silver plan available in a state. That’s the benchmark used to calculate ACA subsidies, and in 2015 Silver plans were roughly 68% of policies sold through an exchange. Even more important, I found a table showing the cost of the second-lowest-cost Silver plan for 40-year-olds by state, but I couldn’t find a similar table for other levels.

The least our 40-year-old could spend on the second-lowest-cost Silver plan this year is $2,196, in New Mexico; the highest premium is $8,628, in Alaska. The median is $3,336. Divide the penalty by the premium, and you get 32% of the cheapest premium and 21% of the median premium. Put another way, paying the penalty saves our 40-year-old consumer $1,500 in New Mexico and more than $2,600 in the mythical state of median. I did find a table showing the national average premium a 21-year-old would pay for a Bronze plan: $2,411. In this situation, the $695 penalty amounts to just 29% of the policy’s cost, a savings of more than $1,700.

The purpose of this post is not to encourage people to go uninsured. I think that’s financially stupid, given the cost of needing health insurance coverage and not having it. And, personally, I support the individual mandate. I also understand the political obstacles to establishing a real penalty for remaining uninsured. However, I also believe the individual market in this country is in trouble. (More on this in a later post.) Adverse selection is a contributing cause of this danger.

The individual mandate is supposed to mitigate adverse selection. The enforcement mechanism for that mandate, however, is a penalty that, for many people, is no penalty at all. That’s not just my opinion. That’s the math.

A version of this article was originally posted on LinkedIn.
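For readers who want to check the arithmetic, here is a minimal Python sketch. It uses the premium figures cited above and assumes the flat $695 single-adult penalty applies (i.e., that 2.5% of household income is not the greater amount):

```python
# ACA individual-mandate math for a single 40-year-old, 2016.
# Premiums are the second-lowest-cost Silver-plan figures cited above.
PENALTY = 695  # flat penalty, assuming 2.5% of income is lower

premiums = {
    "New Mexico (cheapest)": 2196,
    "Median state": 3336,
    "Alaska (highest)": 8628,
}

for label, premium in premiums.items():
    pct = PENALTY / premium          # penalty as a share of the premium
    savings = premium - PENALTY      # what going uninsured "saves"
    print(f"{label}: penalty is {pct:.0%} of premium; "
          f"going uninsured 'saves' ${savings:,}")
```

Running this reproduces the figures in the text: the penalty is about 32% of the cheapest premium and 21% of the median, with "savings" of roughly $1,500 and $2,600, respectively.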

Alan Katz

Alan Katz speaks and writes nationally on healthcare reform, technology, sales and business planning. He is author of the award-winning Alan Katz Blog and of Trailblazed: Proven Paths to Sales Success.

In Third Parties We (Mis)trust?

Mutual distributed ledgers, enabled by blockchain, are changing the nature of trust in financial services, with profound implications.

Technology is transforming trust. Never before has it been easier to start a distant geographical relationship. With a credible website and reasonable products or services, people are prepared to learn about companies half a world away and enter into commerce with them. Society is changing radically when people find themselves trusting people with whom they’ve had no experience, e.g. on eBay or Facebook, more than they trust banks they’ve dealt with their whole lives. Mutual distributed ledgers pose a threat to the trust relationship in financial services.

The history of trust

Trust leverages a history of relationships to extend credit and benefit of the doubt to someone. Trust is about much more than money; it’s about human relationships, obligations and experiences and about anticipating what other people will do. In risky environments, trust enables cooperation and permits voluntary participation in mutually beneficial transactions that are otherwise costly to enforce or cannot be enforced by third parties. By taking a risk on trust, we increase the amount of cooperation throughout society while simultaneously reducing the costs, unless we are wronged.

Trust is not a simple concept, nor is it necessarily an unmitigated good, but trust is the stock-in-trade of financial services. In reality, financial services trade on mistrust. If people trusted each other on transactions, many financial services might be redundant. People use trusted third parties in many roles in finance: for settlement, as custodians, as payment providers, as poolers of risk. Trusted third parties perform three roles:
  • validate – confirming the existence of something to be traded and membership of the trading community;
  • safeguard – preventing duplicate transactions, i.e. someone selling the same thing twice or "double-spending";
  • preserve – holding the history of transactions to help analysis and oversight, and in the event of disputes.
A ledger is a book, file or other record of financial transactions. People have used various technologies for ledgers over the centuries. The Sumerians used clay cuneiform tablets. Medieval folk split tally sticks. In the modern era, the implementation of choice for a ledger is a central database, found in all modern accounting systems. In many situations, each business keeps its own central database with all its own transactions in it, and these systems are reconciled, often manually and at great expense if something goes wrong. But in cases where many parties interact and need to keep track of complex sets of transactions, they have traditionally found that creating a centralized ledger is helpful.

A centralized transaction ledger needs a trusted third party who makes the entries (validates), prevents double counting or double spending (safeguards) and holds the transaction histories (preserves). Over the ages, centralized ledgers have been found in registries (land, shipping, tax), exchanges (stocks, bonds) and libraries (index and borrowing records), to give a few examples.

The latest technological approach to all of this is the distributed ledger (aka blockchain, aka distributed consensus ledger, aka the mutual distributed ledger, or MDL, the term we’ll stick to here). To understand the concept, it helps to look back over the story of its development.

1960s/'70s: Databases

The current database paradigm began around 1970 with the invention of the relational model and the widespread adoption of magnetic tape for record-keeping. Society runs on these tools to this day, even though some important things are hard to represent using them. Trusted third parties work well on databases, but correctly recording remote transactions can be problematic. One approach to remote transactions is to connect machines and work out the lumps as you go. But when data leaves one database and crosses an organizational boundary, problems start.
For Organization A, the contents of Database A are operational reality, true until proven otherwise. But for Organization B, the message from A is a statement of opinion. Orders sit as “maybe” until payment is made and cleared past the last possible chargeback: This tentative quality is always attached to data from the outside.

1980s/'90s: Networks

Ubiquitous computer networking came of age two decades after the database revolution, starting with protocols like email and hitting its full flowering with the invention of the World Wide Web in the early 1990s. The network continues to get smarter, faster and cheaper, as well as more ubiquitous – and it is starting to show up in devices like our lightbulbs under names like the Internet of Things. While machines can now talk to each other, the systems that help us run our lives do not yet connect in joined-up ways. Although in theory information could just flow from one database to another with your permission, in practice the technical costs of connecting databases are huge. Worse, we go back to paper and metaphors from the age of paper because we cannot get the connection software right. All too often, the computer is simply a way to fill out forms: a high-tech paper simulator. It is nearly impossible to get two large entities to share our information between them on our behalf.

Of course, there are attempts to clarify this mess – to introduce standards and code reusability to help streamline business interoperability. You can choose from EDI, XMI-EDI, JSON, SOAP, XML-RPC, JSON-RPC, WSDL and half a dozen more standards to “assist” your integration processes. The reason there are so many standards is that none of them finally solved the problem. Take the problem of scaling collaboration.
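The problem is the handshake count: n partners who all connect pairwise need n(n-1)/2 links, so the cost of integration grows quadratically. A quick sketch (Python, purely for illustration):

```python
# Pairwise connections among n collaborating partners: n * (n - 1) / 2.
def connections(n: int) -> int:
    return n * (n - 1) // 2

for n in (2, 5, 10, 50):
    print(n, "partners ->", connections(n), "connections")
# 5 partners need 10 connections; 10 partners need 45.
# Quadratic growth is why ad-hoc integration pools stop growing.
```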
Say that two of us have paid the up-front costs of collaboration and have achieved seamless technical harmony, and now a third partner joins our union, then a fourth and a fifth … by five partners, we have 10 connections to debug; by 10 partners, the number is 45. The cost of collaboration keeps going up for each new partner who joins our network, and the result is small pools of collaboration that just will not grow. This isn’t an abstract problem – this is banking, finance, medicine, electrical grids, food supplies and the government.

A common approach to this quadratic quandary is to put somebody in charge, a hub-and-spoke solution. We pick an organization – Visa would be typical – and all agree that we will connect to Visa using its standard interface. Each organization has to get just a single connector right. Visa takes 1% off the top, making sure that everything clears properly. But while a third party may be trusted, that doesn’t mean it is trustworthy. There are a few problems with this approach, but they can be summarized as "natural monopolies." Being a hub for others is a license to print money for anybody who achieves incumbent status. Visa gets 1% or more of a very sizeable fraction of the world’s transactions with this game; Swift likewise. If you ever wonder what the economic upside of the MDL business might be, just think about how big that number is across all forms of trusted third parties.

2000s/'10s: Mutual distributed ledgers

MDL technology securely stores transaction records in multiple locations with no central ownership. MDLs allow groups of people to validate, record and track transactions across a network of decentralized computer systems with varying degrees of control of the ledger. Everyone shares the ledger. The ledger itself is a distributed data structure held in part or in its entirety by each participating computer system. The computer systems follow a common protocol to add transactions.
The protocol is distributed using peer-to-peer application architecture. MDLs are not technically new – concurrent and distributed databases have been a research area since at least the 1970s. Z/Yen built its first one in 1995. Historically, distributed ledgers have suffered from two perceived disadvantages: insecurity and complexity. These two perceptions are changing rapidly because of the growing use of blockchain technology, the MDL of choice for cryptocurrencies. Cryptocurrencies need to:
  • validate – have a trust model for time-stamping transactions by members of the community;
  • safeguard – have a set of rules for sharing data of guaranteed accuracy;
  • preserve – have a common history of transactions.
If faith in the technology’s integrity continues to grow, then MDLs might substitute for two roles of a trusted third party: preventing duplicate transactions and providing a verifiable public record of all transactions. Trust moves from the third party to the technology. Emerging techniques, such as smart contracts and decentralized autonomous organizations, might in future also permit MDLs to act as automated agents.

A cryptocurrency like bitcoin is an MDL with "mining on top." The mining substitutes for trust: "proof of work" is simply proof that you have a warehouse of expensive computers working, and the proof is the output of their calculations! Cryptocurrency blockchains do not require a central authority or trusted third party to coordinate interactions, validate transactions or oversee behavior. However, when the virtual currency is going to be exchanged for real-world assets, we come back to needing trusted third parties to trade ships or houses or automobiles for virtual currency. A big consequence may be that the first role of a trusted third party – validating an asset and identifying community members – becomes the most important. This is why MDLs may challenge the structure of financial services, even though financial services are here to stay.

Boring ledgers meet smart contracts

MDLs and blockchain architecture are essentially protocols that can work as well as hub-and-spoke for getting things done, but without the liability of a trusted third party in the center that might choose to exploit the natural monopoly. Even with smaller trusted third parties, MDLs have some magic properties: the same agreed data on all nodes, "distributed consensus," rather than passing data around through messages. In the future, smart contracts can store promises to pay and promises to deliver without having a middleman or exposing people to the risk of fraud.
The same logic that secured "currency" in bitcoin can be used to secure little pieces of detached business logic. Smart contracts may automatically move funds in accordance with instructions given long ago, like a will or a futures contract. For pure digital assets, there is no counterparty risk, because the value to be transferred can be locked into the contract when it is created and released automatically when the conditions and terms are met: If the contract is clear, then fraud is impossible, because the program actually has real control of the assets involved rather than requiring trustworthy middlemen like ATMs or car rental agents. Of course, such structures challenge some of our current thinking on liquidity.

Long Finance has a Zen-style koan: “If you have trust, I shall give you trust; if you have no trust, I shall take it away.” Cryptocurrencies and MDLs are gaining more and more trust. Trust in contractual relationships mediated by machines sounds like science fiction, but the financial sector has profitably adapted to the ATM, Visa, Swift, Big Bang, HFT and many other innovations. New ledger technology will enable new kinds of businesses, as reducing the cost of trust and fixing problems allows new kinds of enterprises to be profitable. The speed of adoption of new technology sorts winners from losers.

Make no mistake: The core generation of value has not changed; banks are trusted third parties. The implication, though, is that much more will be spent on identity (such as anti-money-laundering/know-your-customer checks backed by indemnity) and asset validation than on transaction fees.

A U.S. political T-shirt about terrorists and religion inspires a closing thought: “It’s not that all cheats are trusted third parties; it’s that all trusted third parties are tempted to cheat.” MDLs move some of that trust into technology. And as costs and barriers to trusted third parties fall, expect demand and supply to increase.
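The "value locked in at creation, released only when the terms are met" idea can be caricatured in a few lines. This is a toy sketch only; the class name, the `condition` callable and the string parties are all invented for illustration and do not correspond to any real smart-contract platform:

```python
# Toy "smart contract": value is locked in at creation and released
# to the payee only if the agreed condition holds at settlement;
# otherwise it is refunded to the payer. No middleman holds the funds.
class ToyEscrow:
    def __init__(self, payer, payee, amount, condition):
        self.payer, self.payee, self.amount = payer, payee, amount
        self.condition = condition  # callable returning True/False
        self.settled = False

    def settle(self):
        """Release funds according to the condition; one-shot only."""
        if self.settled:
            raise RuntimeError("contract already settled")
        recipient = self.payee if self.condition() else self.payer
        self.settled = True
        return recipient, self.amount

# A futures-like promise: pay only once the delivery flag is set.
delivered = {"flag": False}
contract = ToyEscrow("alice", "bob", 100, lambda: delivered["flag"])
delivered["flag"] = True
print(contract.settle())  # ('bob', 100)
```

The point of the sketch is the one in the text: the program, not a middleman, controls the asset, so a clear contract leaves no room for either side to renege.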

Michael Mainelli

Michael Mainelli co-founded Z/Yen, the City of London’s leading commercial think tank and venture firm, in 1994 to promote societal advance through better finance and technology. Today, Z/Yen boasts a core team of 25 highly respected professionals and is well capitalized because of successful spin-outs and ventures.

Why Healthcare Costs Soar (Part 5)

Hospital mergers and acquisitions of physician practices keep driving up costs. It's high time we changed the equation.

Readers of Cracking Health Costs know that healthcare is complex and consumes an ever-greater share of GDP in the U.S., while our health outcomes fall behind those of our peer countries. According to the 2015 Health Care Services Acquisition Report, deal volume for businesses in the healthcare services sector rose 18%, with 752 transactions in 2014, for a total of $62 billion; acquisitions of physician practices accounted for $3.2 billion of the total. As healthcare suppliers continue to consolidate, what does this mean for the employers who pay for these services?

With the attention around value-based contracts and accountable care organizations (ACOs), we should expect the number of ACO contracts to continue to expand beyond the 750 in existence today, and the value-based concept sounds good. But Dr. Eric Bricker’s blog pointed out that 41% of all physicians did not know whether they participated in an ACO, as referenced in the Feb. 10, 2016, issue of Medical Economics magazine. Is there real motivation to change? Hospital mergers lead to average price increases of more than 20% for care, while physician prices increase nearly 14% post-acquisition. The result: Value-based contracts will be based on higher fees for the combined entities.

In Part 3 of this series, the provider we mentioned built a strong reputation, which let it charge higher per-unit fees. But when that provider enters into value-based contracts, renewals will depend on its ability to hit cost targets agreed on with the insurance companies. While the per-unit price in those contracts will be important, the Seattle provider’s biggest opportunity is to establish a more consistent process of care among its physicians, so employers stop paying for the wide variation in treatment and for unnecessary care.

Here’s what we know: 1) There has been value-based contracting; 2) there has been data to assess performance; 3) yet there remains extremely wide variation in care among providers, especially for patients with complex health problems. Where such variation exists in healthcare, many people are getting substandard care. So why is there still variation? Well, if you sold a consumer product, like a flat-screen TV, that had wide variation in results yet commanded a premium price and saw sales stay strong, how motivated would you be to change your process?

With TVs, there is ample competition. Consumers will purchase another TV brand if one is over-priced or of poor quality. But, in self-insured benefit plans, most employers have not had the appetite to take the tough but necessary steps toward disintermediation, despite the huge differences in price and quality. It’s high time for employers to replicate how purchasers in other industries have collaborated with their suppliers to address variations in process and quality and to eliminate cost inefficiencies.

Tom Emerick

Tom Emerick is president of Emerick Consulting and cofounder of EdisonHealth and Thera Advisors. Emerick’s years with Wal-Mart Stores, Burger King, British Petroleum and American Fidelity Assurance have provided him with an excellent blend of experience and contacts.


Chip Cards Will Cut Cyber Fraud -- for Now

The new chip cards will cut crime, at least for big retailers, but the U.K. and Canada show that online credit card fraud will increase.

Visa has released data showing adoption of Visa chip cards by U.S. banks and merchants is gathering steam. But the capacity for Europay-Mastercard-Visa (EMV) chip cards to swiftly and drastically reduce payment card fraud in the U.S. is by no means assured. Just look north to Canada, where EMV cards have been in wide use since 2011. Criminals have simply shifted fraudulent use of payment card accounts to online purchases—where the physical card does not come into play. Security and banking experts expect a similar pattern to play out in the U.S., where banks and merchants were under an October 2015 deadline, imposed by Visa and MasterCard, for adopting EMV systems.

Heeding that deadline, major retail chains and big banks are driving up adoption numbers in the U.S. However, thousands of small and mid-sized businesses (SMBs) remain on the fence.

SMBs slower to switch

SMBs are methodically assessing the risk vs. reward of racing to adopt EMV, Brian Engle tells ThirdCertainty. Engle is executive director of the newly founded Retail Cyber Intelligence Sharing Center, or R-CISC.
Brian Engle, Retail Cyber Intelligence Sharing Center executive director
Company decision-makers are doing their due diligence, factoring in the potential for fraud, the cost of implementing EMV technology and the risk of chargebacks, he says. “From a transactional volume perspective, some are going to accept risks and move at a rate that’s more appropriate for the size of their organization,” Engle says. There’s no question the U.S. is in EMV saturation mode. As of the end of 2015, Visa tells us:
  • The volume of chip transactions in the U.S. increased from $12.1 billion in November to $15.8 billion in December, a 30% pop.
  • Seven out of 10 Americans now have at least one chip card in their wallet.
  • 93% of consumers are aware that the transition to EMV is happening.
Cryptogram makes things more complicated

Unlike magnetic-stripe cards, EMV cards are more difficult to counterfeit because the chip contains a cryptogram. When the card is inserted into the point-of-sale (POS) terminal—vs. being swiped—the cryptogram creates a token that’s unique to each transaction, and all the information is encrypted as it’s transmitted to the terminal and the bank. This process takes a few seconds, during which the consumer must leave her card inserted in the POS terminal. U.S. consumers are in the process of modifying their behavior at the checkout stand. Patience for a few seconds is required. Those precious seconds of inconvenient waiting represent an investment in tighter security.

But not as tight as when you use a chip card in Canada or Europe. That’s because EMV cards not only generate a one-time authorization token; they are also designed to require the user to enter a PIN as a second factor of authentication. However, PIN compliance was not part of the October 2015 deadline. Thus, most EMV in-store transactions in the U.S. still require only a signature, which, of course, any impostor can forge.

Criminals, on the other hand, won’t be able to hack into store networks and steal any useful transaction data, at least not any in which chip cards were used. “Even if you steal the information, it becomes very difficult to use it. You’d get a long string of letters and numbers that can’t do anything,” explains Ben Knieff, senior analyst for retail banking at Aite Group, an independent research and advisory firm that specializes in financial services.

Criminals reportedly were able to breach Wendy’s customers' magnetic-stripe payment card data recently. That breach was disclosed after numerous stolen card numbers were subsequently used at other merchants, and the trail led back to Wendy’s. This kind of credit card fraud is exactly why U.S. financial institutions are migrating from magnetic-stripe cards to new technology that uses a much more secure chip. Aite Group estimates that EMV will significantly reduce U.S. counterfeit card fraud—from an estimated peak of $3.61 billion in 2015 to $1.77 billion in 2018.
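The per-transaction token idea described above can be sketched in a few lines. This is a deliberately simplified illustration (an HMAC over a transaction counter, with an invented key) of why a captured chip-card token is useless for replay; it is not the actual EMV cryptogram algorithm, which involves card-unique keys and issuer-side validation:

```python
import hashlib
import hmac

# Simplified illustration: the chip holds a secret key and a counter;
# each transaction yields a token that changes even for the same
# amount, so a stolen token cannot be replayed later.
CARD_KEY = b"secret-key-provisioned-by-issuer"  # hypothetical key

def transaction_token(counter: int, amount_cents: int) -> str:
    msg = f"{counter}:{amount_cents}".encode()
    return hmac.new(CARD_KEY, msg, hashlib.sha256).hexdigest()[:16]

t1 = transaction_token(1, 499)  # first $4.99 purchase
t2 = transaction_token(2, 499)  # same amount, next transaction
print(t1, t2)
assert t1 != t2  # a token captured from transaction 1 is worthless later
```

Contrast this with a magnetic stripe, which hands every terminal the same static account data every time it is swiped.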
Scott Schober, Berkeley Varitronics Systems Inc. president and CEO
Even so, the technology is not foolproof, because bad actors can use other tricks. “The EMV technology is still hackable,” says Scott Schober, president and CEO of Berkeley Varitronics Systems Inc., which specializes in wireless threat detection. “However, hackers are going to go after the simple hack.” Identity theft experts anticipate that fraudsters will simply shift their attention to merchants that use mobile payments—or don’t use a physical POS terminal at all. “For bad actors, when one avenue dries up, they will look for other ways,” says Numaan Huq, a Canada-based senior threat researcher with Trend Micro’s Forward-Looking Threat Research Team.

Some transactions safer than others

In Canada, where point-to-point encryption is now standard for retailers, Huq says he feels very safe when using a credit card in stores. But at places like hotels? Not so much. That’s because hotels collect credit card information for reservations, and, when that system is hacked, all the data is compromised. The same goes for various service providers, like medical offices. “Bad actors will find new avenues, and I expect, over time, the fraud levels (in the U.S.) will go up again,” Huq says.

That’s what happened in Canada, the U.K. and other countries that have adopted EMV. Canada, for example, saw a 54% decline in counterfeit-card fraud and a 133% jump in “card-not-present” (CNP) fraud between 2008 and 2013, according to Aite Group research. “In the past, most of the tools hackers used were extremely crude,” Schober says. “But advances in technology are making it much easier to compromise people online.” Aite estimates that CNP fraud in the U.S. will grow from $2.9 billion to $6.4 billion as hackers shift their tactics. But, Knieff says, criminals have one thing going against them—online credit card fraud is not a scalable “business.” Criminals can’t buy 40 TVs from Amazon.com, for example.
“Application fraud—using stolen or synthetic identities to open new accounts … becomes much more attractive,” he says. “Yes, CNP will increase, but it will not increase geometrically because it’s hard to scale.” Many organizations may not even be ready to focus on securing their online systems. Engle, of R-CISC, uses a hockey analogy, saying retailers are “trying to skate to where the puck is going.” That is, at the moment they’re still trying to figure out the transition to EMV.

SMBs particularly vulnerable

In the meantime, smaller businesses face an increased risk. “The fraudsters will utilize POS malware until they can’t, and those smaller retailers are going to continue to be in their cross-hairs,” he says. “The ability to affect small retailers at a high rate is very profitable for them.” Attacks on large retailers take a lot more time and resources, Huq says. “A small mom-and-pop shop is a no-brainer to hit,” he says, adding that mobile payments, especially, are a concern because of the proliferation of malware, particularly for Android systems. “It’s easy to use for small businesses because it costs less,” he says. “But in the future, I think this will be a new way for bad actors to steal credit card data.”

This post was written by Rodika Tollefson.

Byron Acohido

Byron Acohido is a business journalist who has been writing about cybersecurity and privacy since 2004, and currently blogs at LastWatchdog.com.

Competing in an Age of Data Symmetry

It used to be that companies could develop proprietary information and, thus, a sustainable competitive advantage. Not anymore.

For centuries, people have lived in a world where data was largely proprietary, creating asymmetry. Some had it. Others did not. Information was a currency. Some organizations held it, and profited from it. We are now entering an era of tremendous data balance — a period of data symmetry that will rewrite how companies differentiate themselves. The factors that move the world toward data symmetry are time, markets, investment and disruption.

Consider maps and the data they contained. Not long ago, paper maps, travel books and documentaries offered the very best views of geographic locations. Today, Google allows us to cruise nearly any street in America and get a 360° view of homes, businesses and scenery. Electronic devices guide us along the roadways and calculate our ETA. A long-established map company such as Rand McNally now has to compete with GPS up-and-comers selling “simple apps” with the same information. They all have access to the same data. When it comes to the symmetry of geographic data, the Earth is once again flat.

Data symmetry is rewriting business rules across industries and markets every day. Insurance is just one industry where it is on the rise. For insurers to overcome the new equality of data access, they will need to understand both how data is becoming symmetrical and how they can re-envision their uniqueness in the market.

It will be helpful to first understand how data is moving from asymmetrical to symmetrical. Let’s use claims as an example. Until now, the insurer’s best claims data was found in its own stockpile of claims history and demographics. An insurer that was adept at managing this data and applied actuarial science would find itself in a better position to assess risk. Competitively, it could rise to the top of the pack by pricing appropriately and acquiring appropriately. Today, all of that information is still very relevant.
However, in the absence of that information, an insurer could also rely upon a flood of data streams coming from other sources. Risk assessment is no longer confined to historical data, nor is it confined to answers to questions and personal reports. Risk data can be found in areas as simple as cell phone location data — an example of digital exhaust.

Digital exhaust as a source of symmetry

Digital exhaust is the data trail that all of us leave on the digital landscape. Recently, the New York City Housing Authority wished to determine if the “named” renter was the one actually living in a rent-controlled apartment. A search of cell phone tower location records, cross-referenced to a renter’s information, was able to establish the validity of renter occupation. That is just one example of digital exhaust data being used as a verification tool. Another example can be found in Google’s Waze app. Because I use Waze, Google now holds my complete driving history — a telematics treasure trove of locations, habits, schedules and preferences. The permissions language allows Waze to access my calendars and contacts. With all of this, in conjunction with other Google data sets, Google can create a fairly complete picture of me. This, too, is digital exhaust. As auto insurers are proving each day, cell phone data may be more informative to proper pricing than previous claims history. How long is it until auto insurers begin to look at location risk, such as too much time spent in a bar or frequent driving through high-crime ZIP codes? If ZIP codes matter for where a car is parked each night, why wouldn’t they matter for where it spends the day?

Data aggregators as a source of symmetry

In addition to digital exhaust, data aggregators and scoring are also flattening the market and bringing data symmetry to markets. Mortgage lenders are a good example from outside the industry.
Most mortgage lenders pay far more attention to comprehensive credit scores than to an individual’s performance within their own lending operation. The outside data matters more than the inside data, because the outside data gives a more complete picture of the risk, compiled from a greater number of sources.

Within insurance, we can find a dozen or more ways that data acquisition, consolidation and scoring are bringing data symmetry to the industry. Quest Diagnostics supplies scored medical histories and pharmaceutical data to life insurers — any of whom wish to pay for it. RMS, AIR Worldwide, EQECAT and others turn meteorological and geographical data into shared risk models for P&C insurers. That kind of data transformation can happen in nearly any stream of data. Motor vehicle records are scored by several agencies. Health data streams could also be scored for life and health insurers. Combined scores could be automatically evaluated and placed into overall scores. Insurers could simply dial up or dial down their acceptance based on their risk tolerance and pricing. Data doesn’t seem to stay hidden. It has value. It wants to be collected, sold and used. Consider all the data sources I will soon be able to tap into without asking any questions. (This assumes I have permissions, and barring changes in regulation.)
  • Real-time driving behavior.
  • Travel information.
  • Retail purchases and preferences.
  • Mobile statistics.
  • Exercise or motion metrics.
  • Household or company (internal) data coming from connected devices.
  • Household or company (external) data coming from geographic databases.
These data doors, once opened, will be opened for all. They are opening on personal lines first, but they will open on commercial lines, as well. Now that we have established that data symmetry is real, and have seen how it will place pressure upon insurers, it makes sense to look at how insurers will use data and other devices to differentiate themselves. In Part 2 of this blog, we’ll look at how this shift in data symmetry is forcing insurers to ask new questions. Are there ways they can expand their use of current data? Are there additional data streams that may be untapped? What does the organization have or do that is unique? The goal is for insurers to innovate around areas of differentiation. This will help them rise above the symmetry, embracing data’s availability to re-envision their uniqueness.
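The combined-scoring idea above — normalizing several data streams into one overall score and letting an insurer "dial up or dial down" acceptance — can be sketched in a few lines. The stream names, weights and thresholds here are hypothetical illustrations, not an actual scoring model:

```python
# Sketch: combine several per-stream risk scores (each normalized to 0..1)
# into one weighted overall score, then apply an insurer-chosen cutoff.

def overall_score(scores, weights):
    """Weighted average of per-stream risk scores; lower means less risky."""
    total_weight = sum(weights[k] for k in scores)
    return sum(scores[k] * weights[k] for k in scores) / total_weight

def accept(score, tolerance):
    """The insurer dials acceptance up or down by moving the tolerance cutoff."""
    return score <= tolerance

# Hypothetical streams and weights, echoing the data-source list above.
weights = {"driving": 0.5, "travel": 0.2, "household": 0.3}
applicant = {"driving": 0.2, "travel": 0.6, "household": 0.1}

risk = overall_score(applicant, weights)
conservative = accept(risk, tolerance=0.20)  # stricter insurer declines
aggressive = accept(risk, tolerance=0.30)    # looser insurer accepts
```

The point of the sketch is the last two lines: with symmetrical data, two insurers see the same score, and differentiation comes down to where each sets its cutoff and price.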

John Johansen

John Johansen is a senior vice president at Majesco. He leads the company's data strategy and business intelligence consulting practice areas. Johansen consults to the insurance industry on the effective use of advanced analytics, data warehousing, business intelligence and strategic application architectures.

The New Age of Insurance Aggregators

Lead generators and call-center agencies show promise, as aggregators gain ground, but "digital agencies" are the future.

Tech innovation is coming to insurance, but where and when it strikes is uneven. Auto and health insurance have been facing serious disruption, for instance, but for very different reasons (self-driving cars and telematics, vs. the ACA and hospital mega-mergers). Life insurance and commercial P&C are only now feeling disruption. Reinsurance and annuities are following behind. To see trends, then, it can be instructive to focus on specific insurance functions rather than the type (market vertical). Distribution — that is, sales and marketing — is one area that has been especially active compared with other functions such as underwriting, risk, investments, admin/support or claims.

Why disrupt distribution? It’s where the money is. In general, when a P&C or life insurer gets $1 in premium, 40 cents goes to distribution (marketing/sales costs, i.e. the agent) and 50 cents goes to everything else (underwriting, claims, service/support, risk, fraud, product, executives, etc.). Only 10 cents is profit. The largest distribution cost is usually agent commissions, which range from 50% to 130% of a policy’s first-year premium. It’s easy for carriers to work with alternative distribution channels. Insurance carriers are used to third-party distribution. They have been using independent agents, wholesale agents and affiliates (e.g., sales through AARP) for years. Systems are already in place to easily take on new distribution outlets.

The rise of insurance aggregators

Aggregators are simply comparison shopping sites — like kayak.com for insurance. They allow consumers to easily compare product features, carriers, coverage and price. They aren’t the only distribution disruptors, but new developments are making them more potent. Comparison sites come in three general flavors: lead generators, call-center-based agencies and digital agencies. From their websites, it can be difficult to tell them apart, but they operate differently and appeal to different investors.
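The distribution economics above can be checked with back-of-the-envelope arithmetic. The 40/50/10 split and the 50%-130% commission range are the article's illustrative averages, not figures for any particular carrier:

```python
# Sketch of the premium split: of each $1 of premium, 40 cents goes to
# distribution, 50 cents to everything else, leaving 10 cents of profit.

def premium_split(premium, distribution_share=0.40, operations_share=0.50):
    """Split premium into distribution, everything else, and profit."""
    distribution = premium * distribution_share  # marketing/sales, i.e. the agent
    operations = premium * operations_share      # underwriting, claims, service, etc.
    profit = premium - distribution - operations
    return distribution, operations, profit

def first_year_commission(premium, rate):
    """Agent commission on a policy's first-year premium."""
    assert 0.50 <= rate <= 1.30, "outside the article's quoted 50%-130% range"
    return premium * rate

dist, ops, profit = premium_split(1.00)           # per premium dollar
commission = first_year_commission(1000.0, 1.00)  # hypothetical $1,000 policy
```

Seen this way, it is clear why distribution attracts the disruptors: the slice a technology platform can compress (the commission) is four times the profit margin itself.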
Lead generators — such as InsWeb, NetQuote and Insurance.com — use a comparison shopping format to entice insurance shoppers to provide personal information. They then sell these leads, often to traditional brick-and-mortar insurance agencies. Lead generators specialize in either gathering lots of leads cheaply or curating data to sell fewer but higher-value customer referrals. Lead generation is a specialized technique, an art even. But it’s mostly unrelated to emerging technology. It is difficult for non-lead-gen experts to assess the quality and sustainability of lead-gen platforms.

Call-center agencies can develop leads or purchase them; in any case, their call centers employ licensed insurance agents who make sales. A classic example: SelectQuote, founded in 1985 and known for its early TV ads, is now the largest direct channel for life insurance. Goji is doing this well in the auto insurance space and now has several hundred licensed call-center insurance agents. Call-center agencies are also great businesses. Their core competency, however, isn’t technology. It is HR: hiring and training. Hiring and training a large sales force and managing its churn is growth-limiting. A commissioned call-center sales force might reach 300 agents, but it is almost impossible to get to 1,000 or more high-quality agents. Crucially, too, the model does not leverage technology, so margins depend on commissions remaining high.

Digital agencies allow customers to shop and buy entirely (or nearly so) online. Without a human sales force, they must create well-thought-out user experiences that make the process of buying insurance transparent and understandable. Typically, this requires building sophisticated interfaces into multiple carriers’ systems so that the customer experience is unrelated to the company selected. In general, digital agencies hire developer talent rather than sales talent.
Esurance, one of the first successes in the space, was bought by Allstate for $1 billion in 2011. A more recent example (and AXA Strategic Ventures portfolio company) is PolicyGenius, which is bringing this digital model to life and disability insurance. I believe digital agencies are the future. They focus on technology to sell policies without the aid of a commissioned human agent. This is a crucial distinction because, while both call-center agencies and digital agencies generate income from commissions, call-center agencies have to split that commission with their licensed call-center agents while digital agencies do not. This means that, at scale, digital agencies should have higher profit margins. In fact, it is likely that digital agencies will actually look to lower commissions to drive sales and to provide more competitive pricing than human agents or call-center agencies.

Insights from the U.K.

The U.K. insurance markets adopted the aggregator model earlier than U.S. carriers, which gives us a window into their potential future. What’s happened there shows us three things.

First, aggregators have captured a material share of the insurance market and are continuing to grow. A recent Accenture survey found that, in the U.K., “aggregators account for 60% to 70% of new business premiums in the private automobile insurance market” and that French aggregators have seen “18% average annual growth for the past five years.” We’re already starting to see this in the U.S.: Oliver Wyman recently reported that “the number of insurance policies sold online has grown more than 400% over the last eight years.” Swiss Re said, “More than half of consumers say they are likely to use comparison websites to help make purchase decisions about insurance in the future.”

Second, aggregators eventually will compete with one another across all personal lines of insurance. U.S. digital agencies today focus on one type of insurance (life or auto or home), though they might claim to offer a few others. But almost all U.K. aggregators compete across all personal-line insurance products.

Third, insurance carriers will get into the aggregator game. Two of the top three U.K. aggregators sold to U.K. carriers; GoCompare was purchased by esure, and Confused.com was purchased by the Admiral Group. Only MoneySuperMarket remains independent. And carriers are looking to create aggregators from scratch. Accenture’s report noted that 83% of U.K. carriers are “considering setting up their own aggregator sites.”

Start-ups, tech giants and carriers in the ring

Still, it’s not totally clear how digital distribution will play out here. Multiple start-up digital agencies have raised significant capital. PolicyGenius and CoverHound have raised more than $70 million, $50 million of it just in the last six months. This represents a fraction of what a major tech player (say Google or Amazon) could put toward an effort to enter this market. Interestingly, Google purchased the U.K. aggregator BeatThatQuote in 2011 and launched the California auto insurance aggregator Google Compare less than a year ago, but it just announced it was shutting it down. That could be because Google Compare functioned as lead-gen for CoverHound, and Google decided the fee it received per lead was cannibalizing its ~$50-per-click AdSense revenue from auto insurance search terms. Long-term success in insurance requires focus, deep knowledge of the industry and deep knowledge of the consumer. Insurance is very different from most e-commerce products, and Google’s experience could be indicative of the difficulty big digital brands will have trying to crack the insurance aggregator market. Finally, most large American carriers haven’t decided what to do. Purchasing an aggregator creates strange incentives, potentially driving customers to a competitor.
At the same time, it also gives the insurer the opportunity to quietly select the risk it wants to keep and pass off the risk it’d prefer to give up. Progressive has had mixed success.

Final thoughts

I think the U.S. will see trends and dynamics similar to the U.K.’s, and soon. Within three years, the major digital agencies will start to compete fiercely, and, within five, one or more will have been purchased by a carrier. More digital agencies also will tackle the complex insurance products: annuities, permanent life insurance and commercial insurance. Right now, start-ups are trying — Abaris for annuities and Insureon and CoverWallet for commercial insurance — but their offerings aren’t yet as developed as PolicyGenius or CoverHound. Finally, I think the rise of digital financial advice platforms (a.k.a. robo-advisers, or “robos”) gives digital insurance agencies an interesting channel to consumers that will help at least one of them mature and grow to an IPO.

I asked two digital agency CEOs what they thought the future was going to bring. Here is what they said:

Jennifer Fitzgerald, PolicyGenius’ CEO, said, “Consumers are much more self-directed in the digital age, so the focus is giving potential insurance clients the tools they need — instant and accurate quotes, transparent product recommendations, educational resources — so they can go through the process at their own pace. Then it’s important to be there for them with an intuitive, easy-to-use platform and service when they’re ready to buy. That’s the basis for the new wave of insurance education tools like the PolicyGenius Insurance Calculator, and is reshaping how consumers look at insurance.”

Matt Carey from Abaris said, “I think we’ll soon see a new wave of made-for-online products. Carriers have always gone to great lengths to create products that made sense for a specific channel. The Internet will be no different.
In our business, that probably means very simple lifetime income products that are subscription-based and have low minimums. Until then, I don’t think we’ll reach a tipping point in the migration from offline to online.”

Drew Aldrich

Drew Aldrich is a senior associate at AXA Strategic Ventures, a $250 million venture capital firm focused on early-stage companies. He co-founded and led CalendarFly.com, an education technology company, and has been an active member in the NYC start-up community since 2008. As a founder of the Burgerator Burger Club, he is a recognized "expert" on where to find the best burger in NYC.

3 Ways IT Spending Is Changing

It’s not just competition from start-ups that causes upheaval. It’s also the response within incumbents as they feel the pressure of technology.

The technological environment in which most businesses operate continues to grow more complex and competitive, at an ever-faster pace. It’s not just the competition from innovative, well-funded start-ups that causes upheaval. It’s also the response within established incumbents as they feel the pressure of digital technology. Three examples follow.
  • New spending patterns. Budgets are shifting to reflect the new realities in IT: lower costs with cloud-based services, digital technology that permeates every aspect of the business and business leaders’ increased awareness of the art of the possible. Business units accustomed to depending on shared functional resources for, say, mobile customer apps now feel free to engage outside resources to develop their own. Departments can opt for pay-as-you-go collaboration services instead of investing in stand-alone systems, and functions such as marketing have their own tools with which to collect, analyze and act upon data. In 2015, the majority of technology spending (68%) came from budgets outside the IT organization, a significant increase from 47% the prior year. Although the democratization of technology across an organization is generally a good thing, it can have such unintended consequences as duplicative efforts, incompatible systems, inadequate attention given to cyber-risks and off-strategy investments.
  • New digital leadership. Enterprise technology used to be the sole domain of the IT function, led by the CIO. Now there is a trend toward broader-based oversight. Some companies are expanding the CIO role to foster a more direct connection between technology and strategy. Other companies are creating a chief digital officer (CDO) or similar role to lead digital transformation efforts. In some companies, titles for leaders who oversee digital strategy include the chief experience officer and chief data scientist. This trend focuses C-suite attention on a company’s Digital IQ, which is valuable; however, it also adds to potential uncertainty regarding responsibilities and governance.
  • A new digital debate. Every company has its own point of view about the value of digital technology and how it should be managed. Some of the executives we surveyed define digital as activities related only to the innovation of products and services. Others see it as integrating technology into all parts of the business. Still others say digital is merely a synonym for IT, and some use the term in reference to customer-facing initiatives or data analytics activities. Does this splitting of hairs over definitions really matter? It does if the CEO means one thing and members of the executive team hear something else, especially if it isn’t fully clear who is accountable for the digital strategy.
All this fluidity creates a considerable challenge for business leaders intent on capitalizing on digital technology. Thankfully, there are ways of raising your Digital IQ. You can integrate your digital strategy and business strategy, which means getting top leadership directly involved; you can redesign your innovation practices; and you can invest in a few critical forms of digital prowess, including data analytics, cybersecurity and the building of a digital road map.

This piece was written with:
  • Tom Puthiyamadam, a principal with PwC US, based in New York. He leads the firm’s management consulting practice. He also leads its digital services practice and oversees its Experience Center, which helps clients create next-generation experiences for their customers, employees and partners.
  • Chrisie Wendin, an editor and technology writer with PwC’s Thought Leadership Institute, based in Silicon Valley.

Chris Curran

Chris Curran is a principal and chief technologist for PwC's advisory practice in the U.S. Curran advises senior executives on their most complex and strategic technology issues and has global experience in designing and implementing high-value technology initiatives across industries.

Weirdest Insurance Claims Fraud Ever

Over the years, there have been high-profile cases of claims fraud. Here are eight of the oddest claims ever made.


Damien Gallagher

Damien Gallagher works with Top Quote, one of Ireland’s premier insurance providers, based in Co. Donegal. The company strives to provide a competitive motor policy with the most comprehensive benefits package around. To provide the best possible service, it concentrates exclusively on high-quality car, van and home insurance services.