
New Products and Combined Approaches

At American Family Ventures, we think advances in product development will have a dramatic impact on the insurance industry. We’re also excited about new insurance experiences that combine “Insurance 2.0” distribution, structural and product innovation. We’ll discuss both subjects below.


Before we dive in, let’s define a few things. We consider an “insurance product” to be the entire financial protection experience. From a whole product perspective, this definition includes processes inherent in the creation and the use of such products, including methods of underwriting and activities like claims and policy administration.

We’re watching two product trends in particular:  behavioral disaggregation and the unbundling of policy time and coverages.

Behavioral Disaggregation

As the world becomes increasingly connected through mobile devices, sensors, networks and information sharing, new context becomes available for managing risk. Dynamic insurance products that react to comprehensive information on behavior will be a direct result of these increases in contextual information.


The concept of contextual insurance is not new. In fact, the purpose of insurance underwriting is to segment and accurately price insurance using information about the applicant. Even behavior-based pricing is not a new concept. Since the 1950s, insurers have used access to DMV records to adjust rates in the event of speeding tickets or other traffic violations. However, the innovation we’re seeking goes a few steps further.

An insurance provider that accurately understands the discrete behaviors influencing the safety of an asset and its users could also offer novel and effective ways to protect both. Behavioral data could be generated through connected devices, more robust asset histories and inventory tracking, collaboration with the owner on risk mitigating activities and the like. Using enhanced access to relevant behavioral information, new products would offer increased customization, accessibility, frictionless coverage acquisition and live reconfiguration. Perhaps someday we’ll see dynamic, multi-factor insurance policies that continuously and automatically adjust to choices the policyholder makes.

Consider the following homeowners insurance example: A homeowner replaces an old fireplace with a new model that has important safety features. Of course, this fireplace is “connected.” As soon as the fireplace cloud tells the homeowner’s insurance carrier the new model is installed and active, the homeowner’s premium payment drops by 10%. Impressed with this outcome, the homeowner tells three of her neighbors about the product, and they promptly replace their own rickety fireplaces. As a result of the newly safe cul-de-sac, rates drop an incremental 3% for all residents in the neighborhood. Soon after, one of the neighbors is shocked to discover that his new model was incorrectly installed, creating a small gas leak that would become dangerous over time. Fortunately, his insurer, in coordination with the manufacturer, flags this issue and repairs the unit before it becomes a hazard. A week later, the three homeowners who originally purchased the product, dining out with their insurance savings, all decide to purchase water-leak-detection systems, for which they are promptly rewarded with an additional insurance discount.
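The discount arithmetic in this hypothetical can be sketched in a few lines. The 10% and 3% figures are the made-up values from the story above, and applying the credits multiplicatively is just one possible convention a carrier might choose:

```python
# Illustrative sketch only: the 10% device credit and 3% neighborhood credit
# are the hypothetical values from the fireplace example, not any real
# carrier's rating plan. Credits are applied multiplicatively here.
def adjusted_premium(base, device_discount=0.10, neighborhood_discount=0.03):
    """Apply behavior-based credits to a base premium."""
    return base * (1 - device_discount) * (1 - neighborhood_discount)

print(round(adjusted_premium(1000.0), 2))  # 1000 -> 900 (device) -> 873.0
```

A dynamic, multi-factor policy would extend the same idea: each verified behavior or device contributes a credit (or surcharge) to a continuously recomputed premium.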

We’re still in the early innings of behavior-based insurance, but, as you can see, its impacts are meaningful. Contextual data doesn’t have all the answers, but it will drive new insights and, perhaps more importantly, prevent losses.

Unbundling Policy Time and Coverages

Existing insurance products bundle coverages. For example, consider that a standard homeowners policy consists of four types of coverages:

  1. Coverage for the structure of your home
  2. Coverage for your personal belongings
  3. Liability protection
  4. Additional living expenses if your home is temporarily unlivable

Each of these coverage areas then insures against loss from a number of specific perils (fire, lightning, wind, etc.). Each also has distinct exclusions. These coverages are put together, often in very standard ways, to create homeowners insurance.

However, insurers might also unbundle coverages and fragment coverage time periods to create tailored coverage systems that react to the risks present (and absent) in various circumstances. These strategies subdivide coverage profile and duration into more relevant and accurate segments, offering more accurate pricing or supporting new forms of self-insurance.

Fragmenting coverage time can be accomplished with on-demand or transactional insurance. As we all witness large portions of our lives becoming managed services — transportation, home ownership, fitness — paying to be protected against loss only when specific risks are present or a unique event occurs is an increasingly useful option. This could imply securing insurance only when circumstances or behavior indicate need, or using broad, umbrella-type coverage for losses in everyday activities and ratcheting up coverage for specific types of risk.

For example, imagine an insurance service that uses access to a mobile calendar and other apps to offer timely insurance products based on daily activities. If your morning commute is in a Zipcar (that doesn’t drive itself… yet), you might be offered short-term, simplified personal auto insurance options before you leave. If you instead decide to walk to work that morning, you receive credit toward discounted health and life products. If you’re taking a long flight during inclement weather, you’re prompted with an offer to increase your term life insurance amount. If your job requires you to travel to an unsavory place, your employer is reminded to add kidnapping and ransom insurance to its existing commercial policy (yes, that exists).

Unbundling coverage can be done with a la carte policies. Using a la carte features, insurers can offer insureds more control over the assumption or transfer of risk and, in turn, greater capacity to segment and self-insure (alone or in groups) specific parts of an asset, incidents or perils. This allows for the personal assumption of precise risks by excluding them from coverage. This is accomplished today, in part, through the selection of deductible levels, but we think there are ways to push the concept further.

Of note, we believe the inverse of unbundling — “super-bundling”— is also quite powerful. This refers to insurance products that increase the scope of protection until the insured is no longer required to consider insurance at all. In essence, they abstract the idea of insurance from the buyer. Instead, as long as any obligations the customer owes the insurer (payments or data) are fulfilled, everything and anything is covered. Of course, this convenience is likely to carry additional costs.

These contrasting approaches to providing insurance offer distinct benefits. Unbundling offers maximum economic efficiency in exchange for increased engagement and complexity, whereas super-bundling offers maximum simplicity in exchange for decreased control and higher costs.

Additional Considerations and Questions

Balancing the interests of the individual with the interests of society and public policy is a key question surrounding product innovation. For example, assuming unbundling scenarios are economically viable for the individual, how do we ensure the presence of coverage for liability-related incidents and the protection of third parties?

Other questions that need answering as product innovation advances include:

  • How will privacy and data sharing be addressed in mutually beneficial and safe ways? Customers will expect value in exchange for sharing information about behavior, so data recipients must create the right incentives and will have to protect personal data vigilantly.
  • How does the unbundling of insurance consumption affect the way risk is aggregated and spread across large groups of people?
  • Will frequent, accessible and granular self-insurance create adverse selection issues?
  • Can unbundled customers effectively select risks to self-insure, or will people fall victim to the ludic fallacy, applying oversimplified statistical models to complex systems?
  • And, as with any product, what is the appropriate balance between customization and ease of use?


Insurance distribution and structural/product innovation support one another in ways that are both reactive and complex. As a result, they can be used in coordination to create entirely new insurance experiences.

One example we often discuss is the idea of “entire life” insurance. This is not the same as whole life insurance but rather describes a “super-bundled” risk management product offering the maximum amount of simplicity and flexibility to the insured. In short, entire life insurance would offer a single policy that captures information related to all of your daily needs (transportation, housing, health, travel, etc.) and wraps it into a dynamically priced instrument that indemnifies you against loss from anything bad that might happen. In contrast to on-demand insurance, entire life insurance abstracts usage from the buyer. Instead, the policyholder has a single policy that represents the entire cost to insure that individual based on dynamically adjusted, minute-by-minute protection for all activities.

Such a product, if at all feasible, would require a substantial amount of behavioral data and insight. The makers of this product, at least at first, might also need to discover new approaches to capital raising and risk pooling to offer the product within the bounds of state and federal law. Finally, it stands to reason that a product so deeply integrated with other services and data sources might be sold most effectively via some form of digitally enhanced adviser or life concierge service. Think Jarvis for financial security.


We also think that combinations like entire life create barriers to entry. If we revisit the simple Venn diagram from our first post, you can imagine a defensibility gradient, where increasingly challenging activities — from a technical, regulatory or human capital perspective — build on each other to create complex, difficult-to-replicate models and relationships.

Defensibility across three areas of Insurance 2.0

While the gradient diagram above portrays the center as the most difficult to replicate, it’s not hard to imagine the dark portion of the circle shifting based on the source of competition. In other words, it may be that, when comparing tech startups to insurance incumbents, barriers to entry are shifted toward the product, but when considering incumbent defensibility against market share erosion from incidental channels (competing directly with carriers), the gradient shifts towards distribution.


In the past few posts, we’ve offered some guesses on what the future holds for insurance. However, given the speed of change and complexity of the systems in play, we’ve surely missed things and made mistakes. So, instead of making internal forecasts that are precisely wrong, we opted to share our observations with you, in the hopes you can incorporate or transform these ideas into your own.

If you’re working on changing insurance in these or new ways, let us know!

A Misguided Decision on Driverless Cars

At first glance, the California Department of Motor Vehicles’ recent proposal to ban the testing and deployment of driverless cars seems to err on the side of caution.

On closer inspection, however, the DMV’s draft rules on autonomous vehicles rest on flawed assumptions and threaten to slow innovation that might otherwise bring enormous, time-critical societal benefits.

At issue is the requirement that DMV-certified “autonomous vehicle operators” are “required to be present inside the vehicle and be capable of taking control in the event of a technology failure or other emergency.” In other words, driverless cars will not be allowed on California roads for the foreseeable future.

One problem with the human operator requirement is that it mandates a faulty design constraint. As Donald Norman, the technology usability design expert, has noted, decades of scientific research and experience demonstrate “people are incapable of monitoring something for long periods and then taking control when an emergency arises.”

This has been Google’s direct experience with its self-driving car prototypes, too. As Astro Teller, head of Google[x], told a SXSW audience in early 2015: “Even though people had sworn up and down, ‘I’m going to pay so much attention,’ people do really stupid stuff when they’re driving. The assumption that humans could be a reliable back up for the system was a total fallacy!”

The ramifications are more than just theoretical or technical. The lives and quality of life of millions hang in the balance.

Americans were in more than six million car crashes last year, injuring 2.3 million people and killing 32,675. Worldwide, more than 50 million people were injured, and more than one million were killed. Human error caused more than 90% of those crashes.

It remains unclear whether semi-autonomous or driverless cars would better reduce human error and lower this carnage. Thus, it is important to encourage multiple approaches toward safer cars — as quickly as possible. Instead, California has slammed the brakes on the driverless approach.

Another major problem with the human-operator mandate is that it slows testing and development of systems aimed at providing affordable transportation to the elderly, handicapped or economically disadvantaged. Millions of Americans either cannot drive or cannot afford a car. This hurts their quality of life and livelihood.

Driverless cars could enable Uber-like, door-to-door mobility-on-demand services at a fraction of today’s transportation cost. This will require, however, efficient, low-cost vehicles that do not need (nor need to accommodate) relatively expensive human drivers. It also requires empty driverless cars to shuttle between passengers. The California DMV rules, as proposed, would not allow the testing or deployment of such vehicles or fleet services.

The immediate victim of California’s proposed rules is Google. Google’s self-driving car program is the furthest along in the driverless design approach that the new rules would rein in, and its current efforts are located around its headquarters in Mountain View, CA. Google’s attempt to field a fleet of prototype driverless cars (without steering wheels) would certainly be dashed.

Other companies’ efforts might be affected, too. Will Tesla owners, for example, need to get separate DMV certification to use enhanced versions of Tesla’s autopilot feature? How about GM owners with Super Cruise-equipped cars? How will these rules affect Apple’s car aspirations?

The longer-term victim is California.

Silicon Valley is becoming the epicenter of autonomous vehicle research. Not only are native companies like Google, Tesla and, reportedly, Apple investing heavily in this arena, but the race to develop the technology has compelled numerous traditional automakers to build their own Silicon Valley research centers.

If California regulators limit on-road testing and deployment, companies stretching the boundaries of driverless technology will inevitably shift their investments to more innovation-friendly states (or countries).

The proposed rules must now go through several months of public comment and review before they are finalized. California needs to take that opportunity to reconsider its course on driverless cars.

Assisted Driving Is Taking Over

The power of 35.

The Insurance Institute for Highway Safety (IIHS) estimates that automatic emergency braking and forward-collision warning features could curtail injury claims by as much as 35%. The California Department of Motor Vehicles estimates that in 35% of crashes the brakes were not applied. It is striking that these two numbers match.

These savings are not surprising. Automatic braking will avoid some accidents altogether. When an accident nevertheless occurs, automatic braking may greatly reduce the speed on impact. As a matter of simple physics, kinetic energy grows with the square of speed, so even a modest reduction in speed produces an outsized reduction in the energy that must be absorbed in a collision. The formula, for those interested, is KE = ½mv², where “m” is the mass of the vehicle and “v” is the vehicle’s velocity. Thus, a vehicle that collides at 30 mph has only one-fourth the kinetic energy of a vehicle that collides at 60 mph.
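For those who want to check that claim, here is a minimal sketch. The 1,500 kg mass is an arbitrary assumption; it cancels out of the ratio, which depends only on the speeds:

```python
# Worked check of the kinetic-energy claim: KE = 0.5 * m * v**2,
# so halving speed quarters the energy. Mass cancels in the ratio.
def kinetic_energy(mass_kg, speed_mps):
    return 0.5 * mass_kg * speed_mps ** 2

MPH_TO_MPS = 0.44704          # exact conversion factor
m = 1500.0                    # assumed mass in kg; any value gives the same ratio
ratio = kinetic_energy(m, 30 * MPH_TO_MPS) / kinetic_energy(m, 60 * MPH_TO_MPS)
print(ratio)  # 0.25
```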

While automatic emergency braking and forward-collision warning are standard in some luxury cars and are available as options in many others, 10 automakers have agreed with the National Highway Traffic Safety Administration and the IIHS to establish a time frame for making assisted driving features standard in all cars.

These are significant developments for insurers and for public policy makers.

For insurers, a 35% decrease in injury claims will result in a significant reduction in premium. This may be offset, at least in part, by an increase in the cost of repair for more sophisticated vehicles and the continuing increase in healthcare costs.

Public policy makers should contemplate the potential benefits of a 35% reduction in injuries and deaths because of assisted driving. At present, highway deaths in the U.S. account for 33,000 to 35,000 deaths per year (depending on how one correlates deaths to auto accidents). Over 10 years, this is equivalent to the population of a major city: St. Louis, Minneapolis, Des Moines or the city of your choice. A 35% reduction would cut deaths from 33,000 to 21,450. Every year, 11,550 more people would continue to go about their lives. Add a similar reduction in the more than 2.5 million injuries per year, and the public benefit is overwhelming.
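The arithmetic behind those figures, spelled out:

```python
# A 35% cut applied to the lower bound of 33,000 annual highway deaths.
deaths = 33_000
reduction = 0.35
lives_saved = round(deaths * reduction)   # 11,550
remaining = deaths - lives_saved          # 21,450
print(lives_saved, remaining)
```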

These benefits only accrue as assisted driving features find their way into the fleet. This can be a slow process. It is estimated that electronic stability control, which has been available as an option for many years and has been mandatory since 2012, will not reach 95% penetration until 2029. This is because the average age of automobiles is a bit more than 11 years. Thus, anything that can hasten the adoption of these safety features (and others that are to come) benefits everyone.
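Fleet turnover of this kind is often approximated with a constant-turnover model: each year, some fixed fraction of the fleet is replaced by new vehicles carrying the now-standard feature. The sketch below assumes an 8% annual turnover rate purely for illustration; it is not a measured scrappage figure, and real penetration curves (like the ESC estimate above) start from a nonzero base because the feature was available as an option earlier:

```python
# Toy constant-turnover model of how a newly standard safety feature
# penetrates the vehicle fleet. The 8% annual turnover rate is an
# illustrative assumption, not a measured scrappage statistic.
def penetration(turnover_rate, years):
    """Share of the fleet carrying a feature made standard `years` ago."""
    return 1 - (1 - turnover_rate) ** years

for yrs in (5, 10, 17):
    print(yrs, round(penetration(0.08, yrs), 2))
```

Even under these generous assumptions, penetration stays well below 95% nearly two decades in, which is the point: anything that hastens adoption pays off.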

Encouraging adoption directly implicates insurance. Auto insurance is one of the more expensive costs of owning a car. The cost of these safety features, either as an option or as a standard feature, is also an expense of owning a car. It is critical that savings from the lower frequency and severity of accidents be rapidly passed on to car owners in lower insurance rates, which will help offset the added cost and promote more rapid adoption.

Even when assisted driving features become standard, which will be some years in the future, the majority of the existing fleet may still be on the road for another 11 years or so. Passing substantial insurance savings to potential purchasers will make retiring the old heap more palatable.

Policy makers and regulators can play an important part in facilitating adoption of these safer cars. Laws and regulations that may impede the rapid distribution of insurance savings to insureds should be streamlined. Likewise, some driver-centric rating systems that may distort rates by artificially depressing the weight given to the safety of the vehicle should be reviewed.

Self-driving vehicles of the future have captured the attention of the public and the insurance industry. While many have been looking toward that day, enormous improvements in safety-critical technology are taking place right now. In a sense, the future is already here. Cars are taking over many safety-critical functions from their more fallible drivers. Insurers and policy makers must adjust.

Those interested in a lengthier treatment of this topic can read this article by Thomas Gage and Richard Bishop. And here is a Bloomberg article on a Boston Consulting Group study of the issue.

Tools for Fighting Fraud Come of Age

Insurance fraud, that ever-present nemesis of claims professionals, has a new opponent. A technology triple threat—the Internet, extensive and accessible databases and the pervasiveness of social media—has come of age, and the result is an increased exposure of workers’ compensation fraud and a rise in prosecutions.

As with many industries, the tools used in fighting fraud have evolved over the years, and today’s high-tech resources are completely different from the tools employed a mere 20 years ago.

In the pre-Internet era, employers and insurance industry professionals who suspected potential workers’ compensation fraud had limited, and often expensive, options to gather evidence. Even the initial paperwork was more cumbersome. The adjuster would first complete a hand-written referral form requesting investigative services, which would slowly pass through the fax machine to materialize in the investigator’s office. That’s much different from today’s data integration of claim systems with investigative partners, where a click of a button auto-fills the referral form, and the complete claim file is populated into the investigative company’s web-based case management system.

For surveillance conducted pre-Internet, the investigator would review the Thomas Brothers map, load the large VHS video camera and extra batteries in the van and drive to the subject’s last known residence to roll the dice on filming the correct person. Employers were not able to email photos of employees, and there were no online social networks to locate vacation photos and other important information.

Going From Print and Tape to Digital

Today’s technology allows those fighting fraud to conduct a more comprehensive pre-surveillance investigation than could have been imagined just a few years ago.

Mapping technology provides a clear visual of the subject’s residence and surrounding neighborhood. This allows the investigator to create a detailed surveillance plan including routes, local and covert tail opportunities and other strategies. Online database searches, Department of Motor Vehicle records and social networking searches provide a plethora of information. Additional tools such as GPS tracking and video streaming also have improved the success rate of surveillance.

Today’s video cameras do not resemble their older brothers from the ’90s. The heavy cameras of the past were best used with a tripod to hold the weight, making quick maneuvers difficult. Getting out of the vehicle to obtain film during an on-foot pursuit was extremely challenging. A large duffel bag with a hole cut in the end for the lens was hard to keep clandestine. And covert cameras lacked the quality needed to prove identity. Today’s compact, powerful, digital HD video cameras provide high-quality video and fit comfortably in one hand. Additionally, current covert cameras are undetectable—the camera lens can easily be part of a hat, a shirt button or a keychain. Significantly, these tiny video cameras can capture clear footage almost on par with film obtained using a standard video camera.

In addition to the VHS video camera, the tools of the trade back then included a pager, a heavy cellular phone with a large antenna, a stack of phone books, a shoebox full of maps, several proven pretext scenarios and, most importantly, a Rolodex. Information that we now find on the Internet certainly was obtainable before the Internet era, but it had to be acquired with different and often creative methods.

Digging for Information

While investigators today have Internet connection in their vehicles and can quickly conduct database and social network searches while onsite, the pre-Internet investigator’s most valuable tool was relationships, as information was shared by people, not technology. The investigator often had little information about the subject upon initiation of the investigation and gathered details the old school way—by digging.

Investigators could be found reviewing records at the voter registration office or scanning microfiche at the court to ascertain critical information. They spent a lot of time standing in lines at public agencies and searching through endless records stored in large ledgers, microfiche or index card catalogs. The information found in public records was invaluable—current and former addresses, real property data, encumbrances, marriage licenses, divorce records, birth certificates, bankruptcies, criminal records, traffic tickets, tax liens, civil lawsuits, evictions, business licenses, professional licenses and more.

While this information was vital, it was tedious work retrieving it, especially if the subject had a common name or a maiden name or aliases. Successful investigators had to be not only good at investigation, they also had to be successful at establishing connections to build a Rolodex of contacts. An effective investigator leveraged strategic connections to successfully and quickly gather information. Making connections with the people who worked in the records departments of courts, law enforcement agencies, recorder’s offices, voter registration, licensing bureaus and the like, then gathering phone numbers that rang directly to desks, was essential to efficiently obtain vital information. Likewise, networking with fellow investigators in other areas to trade resources saved time.

Gaining Public and Private Details

Public records have always been a critical source for identifying information, financial information, business records, criminal records, civil litigation records and the like. However, those records did not provide the personal insight that we can find on the Internet.

Today, a search of social networking can yield information, insight and often photographic evidence of a subject’s habits, activities, interests, schedules and behavior. If obtained legally and ethically and stored appropriately for chain of custody, this online evidence can be submitted to medical providers and the Workers’ Compensation Board and used as evidence in Superior Court, including in criminal cases of workers’ compensation fraud.

To learn personal information pre-Internet, one needed connections and creative sleuthing talent. Delivery companies, utilities, contest and sweepstakes promoters, magazines, debt collection agencies, credit reporting agencies, retail and catalog ordering companies often made additional revenue by selling their customers’ personal data.

Today, calling people to obtain information has been replaced with Internet searches. Several companies provide database services, including instant access to credit reporting agencies, public records, utility company records and other information. Now, what previously took many hours, if not days, of phone calls and in-person searches and cost a significant amount is accomplished in mere seconds.

Pretexting—the practice of presenting oneself as someone else to obtain private information—is one strategy that has carried over from pre-Internet days. Indeed, it was all but impossible to conduct a successful investigation without it before the Internet, and it remains useful today. Pretexting is legal in many states, and investigators have historically used it to obtain needed information. A successful pretext call results in a willingness by the subject or other source to share information and, if done correctly, leaves no footprint behind, so the people are never aware they spoke to an investigator.

The successful investigator uses a combination of old and new to navigate today’s complex world of insurance fraud. Pretexting still works in some cases, relationships always will matter and technology continues to evolve and to provide even better data. Workers’ compensation fraud will, unfortunately, always be with us. However, old, new and yet-to-be developed techniques will bring that fraud to light, resulting in a better system for us all.
