The Problem With Telematics

When I attended the Insurance Telematics USA conference in Chicago earlier this month, I expected to see much more enthusiasm. I first wrote about Progressive’s venture into telematics all the way back in the late 1990s, and technology has improved so much since then that the telematics industry would surely be bragging about its breakout into the mainstream or at least predicting that one was imminent. The idea just makes so much sense: being able to track cars so that insurance risks can be determined very precisely for individual drivers, while even providing feedback that improves driving.

While the telematics technology is, in fact, stunning and while there are reasons for great optimism, what I found was not an industry brimming with confidence. I found an industry still searching for the right business model.

Until the industry solves that problem, progress will remain limited.

The Problem

The current approach to telematics is generally to install a device in a customer’s car for six months and have it relay the driver’s actions back to the insurer for evaluation. At the end of the six months, the device is uninstalled, and the insurer tells the driver what sort of discount, if any, she will receive based on her driving habits. A key point is that the issue at hand only concerns discounts; insurers have promised that they won’t raise rates if they find that someone is a worse risk than expected.

Think about the expense that goes into that model: manufacturing the telematics devices; installing and uninstalling them; and transmitting lots of data over a wireless network on which the insurer has to buy bandwidth.

Now think about the benefits. The prospect of a discount has attracted enough good drivers that, if all telematics-based auto policies were rolled into one company, it would be close to being in the top 10 among auto insurers in the U.S. Ptolemus, a strategy consulting firm, said there are 4.4 million cars in the U.S. carrying usage-based insurance (UBI). That’s a lot of cars. But there are 253 million cars and trucks in the U.S., so the market penetration of UBI is just 1.7%. Even in the main ballroom of the conference, full of ardent proponents, only about 5% raised their hands when asked if they had UBI.

Many customers turn out not to be that focused on discounts. They would prefer receiving free access to other services, such as roadside assistance — but which services customers want, how to bundle those services and so on has yet to be worked out.

Even if some new package of free services drove 10 times as many people to buy UBI auto policies, telematics wouldn’t do much to make roads safer. Insurers are offering incentives to a self-selected group of drivers who are already among the safest on the road. And because insurers have decided they can’t raise rates for bad drivers, they won’t be doing anything about the people who cause a huge portion of the accidents and, thus, the costs.

The current business model works — barely. The costs are too high, the offering to consumers isn’t right and the benefits to insurers are too low.

The Potential

Help is on the way from two main sources, which I have seen drive innovation in industry after industry since I started following the world of information technology almost 30 years ago. One source is what I think of as the power of “free.” The other is the power of a platform.

The Power of “Free”

The behavioral economist Dan Ariely has done all sorts of experiments about the power of free and found that it is almost magic. For instance, if someone does volunteer work and you decide to thank him by paying him a little, he will likely cut back on the work he does for you or even stop. Ariely reasons that people evaluate paid work in a hard-nosed way — how many hours do I work, how hard or skilled is the work, how much do others get paid for this work, etc.? — and evaluate volunteer work based on altruistic measures, such as the quality of a cause. If you have people evaluate the return from their free work on a paid scale, you’ll lose them. Similarly, he says, you can get people to do all kinds of uneconomic things if you remove a paltry cost and make something free.

The power of free computing and communication has driven the upheaval of business over the past 30 years, spawning the wide adoption of the Internet, smartphones, etc. and all the business models that have come along with them. (Obviously, we still pay for computers and storage devices, but they are essentially free by comparison with where they were in the 1980s — a gigabyte of memory, which cost $300,000 then, costs about a penny today. Communication costs have gone way down and are headed toward something approaching free, even though telecom and cable companies will fight a rear guard action as long as they can.)

Now the power of free is coming to telematics, because the cost of acquiring information on drivers is heading toward zero.

In the short term, that will be because of smartphone apps. Although some say the data they generate isn’t quite as precise as that from sensors in cars, the apps are good enough for the vast majority of uses, and they cost roughly nothing. There isn’t any need to make a dongle for the car and install and uninstall it. Nor is there a need for the insurer to buy a wireless data plan for the car. The app can do most of the analysis on the phone and just send modest amounts of data back to the insurer, using the driver’s wireless plan.
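To make the on-phone analysis concrete, here is a rough sketch in Python. Every name, unit and threshold is my own hypothetical illustration, not any insurer's actual app; the point is simply that thousands of raw samples can be reduced on the device to a summary of a few dozen bytes before anything is transmitted.

```python
import json

# Hypothetical illustration (not any insurer's actual app): summarize raw
# sensor samples on the phone so only a compact trip summary is uploaded.

HARD_BRAKE_THRESHOLD = -3.0  # m/s^2; an assumed cutoff for a "hard brake"

def summarize_trip(speed_samples, accel_samples):
    """Reduce per-second samples to a small summary dict."""
    hard_brakes = sum(1 for a in accel_samples if a <= HARD_BRAKE_THRESHOLD)
    avg = round(sum(speed_samples) / len(speed_samples), 1) if speed_samples else 0
    return {
        "seconds_driven": len(speed_samples),
        "max_speed_kph": max(speed_samples, default=0),
        "avg_speed_kph": avg,
        "hard_brakes": hard_brakes,
    }

# A trip generates thousands of samples, but the upload stays tiny.
speeds = [0, 15, 42, 55, 61, 58, 30, 0]               # km/h, one per second
accels = [0.0, 1.2, 2.0, 0.5, -1.0, -3.5, -4.2, 0.0]  # m/s^2
payload = json.dumps(summarize_trip(speeds, accels))
print(payload)  # a few dozen bytes instead of the full raw sample stream
```

A real app would of course use sensor fusion and far more sophisticated scoring, but the economics are the same: the heavy lifting happens on the phone, and only the summary rides on the driver's existing data plan.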

In the long term, things will get even better as “connected cars” move into the market. These cars, already connected wirelessly to the Internet, will automatically generate the kind of information that insurers need. Insurers will be able to know what kind of a driver someone is at the moment she applies, rather than having to guess and then wait six months to know for sure.

The Power of a Platform

From the 1950s through the early 1980s, when IBM controlled the computer industry, the pace of innovation was glacial by today’s standards. Part of the reason was that the slow pace let IBM milk maximum profits, but part was also that IBM had to produce what software types would call the “full stack.” IBM had to develop the semiconductor technology that allowed for faster processors; design those processors; manufacture the processors; design and manufacture just about all the support chips, especially memory; assemble the mainframes; code the operating system; and generate the major pieces of application software. Everything had to come together, from one company, before the next step in innovation happened.

When the PC came along in 1981, with its open architecture, innovation became a free-for-all. Intel owned the chip, and Microsoft the operating system, but everything else was fair game. Companies flooded into the market, innovating in all kinds of smart ways, especially with applications such as the spreadsheet, and the market took off.

The telematics market is well on its way to making the transition from the IBM mainframe days to the open days of the PC and beyond. Initially, Progressive had to pull an IBM and invent the whole process for telematics from beginning to end. Now, an ecosystem has developed, and all sorts of companies are free to innovate at any part of the process.

Verisk has announced an exchange, to which car makers and insurers can contribute data on drivers and from which they can pull information. GM has said it will contribute data from its OnStar system, and GM has one million 4G-connected cars on the road in the U.S. So, the need for everyone to generate their own data is going away.

The Weather Channel (represented on the panel I moderated at the conference) has information that can correlate bad weather very precisely with driving behavior — the company is even working to aggregate information on the speed at which cars’ wipers are operating, to understand in a very granular way just how severe a storm is in a certain spot.
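As a toy illustration of how that kind of aggregation might work (the grid cells and wiper settings below are invented for the example, not The Weather Channel's actual method), wiper-speed reports could be binned by location cell, with the prevailing setting serving as a severity proxy:

```python
from collections import Counter, defaultdict

# Toy illustration (grid cells and categories invented for the example):
# bin wiper-speed reports by location cell and treat the prevailing
# setting as a proxy for storm severity at that spot.

def severity_by_cell(reports):
    """reports: iterable of (grid_cell, wiper_setting) pairs, where the
    setting is one of 'off', 'intermittent', 'low' or 'high'."""
    cells = defaultdict(Counter)
    for cell, setting in reports:
        cells[cell][setting] += 1
    # The most common wiper setting in a cell is the severity estimate.
    return {cell: counts.most_common(1)[0][0] for cell, counts in cells.items()}

reports = [
    ("cell-17", "high"), ("cell-17", "high"), ("cell-17", "low"),
    ("cell-42", "intermittent"), ("cell-42", "intermittent"), ("cell-42", "off"),
]
print(severity_by_cell(reports))  # {'cell-17': 'high', 'cell-42': 'intermittent'}
```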

Many other companies are innovating in new parts of the ecosystem, rather than just focusing on pricing risks better or acquiring customers. For instance, my friend and colleague Stefan Heck, a former director at McKinsey with whom I wrote a book (along with Matt Rogers) about how innovation can overcome resource scarcity, just unveiled an extremely ambitious approach to improving safety, through a company called Nauto. (A writeup in re/code is here.) Agero made a presentation at the conference about how telematics can speed claims processing and cut costs while making customers happy — essentially, the telematics system notifies the insurer instantly about an accident, so the insurer can provide whatever reassurance and help is necessary, while also sending someone to the scene so fast that it can take control of the process, rather than deferring to, among others, municipal towing companies.

The Future

The power of free and the power of a platform ensure that, before too many years go by, the costs for telematics will drop drastically and the benefits to insurers and customers will increase greatly. That still leaves insurers with the task of figuring out the right offering to customers, but, in my experience, once costs get low enough and lots of innovators get interested, experimentation eventually produces the right business model.

The question to me is: Who will the winner be?

12 Issues Inhibiting the Internet of Things

While the Internet of Things (IoT) accounts for approximately 1.9 billion devices today, it is expected to grow to more than 9 billion devices by 2018—roughly equal to the number of smartphones, smart TVs, tablets, wearable computers and PCs combined. But, for the IoT to scale beyond early adopters, it must overcome specific challenges within three main categories: technology, privacy/security and measurement.

Following are 12 hurdles that are hampering the growth of the IoT:

1. Basic Infrastructure Immaturity

IoT technology is still being explored, and the required infrastructure must be developed before it can gain widespread adoption. This is a broad topic, but advancement is needed across the board in sensors themselves, sensor interfaces, sensor-specific microcontrollers, data management, communication protocols and targeted application tools, platforms and interfaces. The cost of sensors, especially more sophisticated multimedia sensors, also needs to shrink for usage to expand into mid-market companies.

2. Few Standards

Connections between platforms are only now starting to emerge. (For example, I want to turn on my lights when I walk into the house, turn down the temperature, turn on some music and lock all my doors – that’s four different ecosystems, from four different manufacturers.) Competing protocols will create demand for bridge devices. Some progress is emerging in the connected home with the Apple and Google announcements, but the same must happen in the enterprise space.

3. Security Immaturity

Many products are built by smaller companies or leverage open-source environments that do not have the resources or time to implement the proper security models. A recent study shows that 70% of consumer-oriented IoT devices are vulnerable to hacking. No IoT-specific security framework exists yet, although the PCI Data Security Standard or the National Institute of Standards and Technology (NIST) Risk Management Guide for ITS may find applicability.

4. Physical Security Tampering

IoT endpoints are often physically accessible by the very people who would want to meddle with their results: customers interfering with their smart meter, for example, to reduce their energy bill or re-enable a terminated supply.

5. Privacy Pitfalls

Privacy risks will arise as data is collected and aggregated. The collation of multiple points of data can swiftly become personal information as events are reviewed in the context of location, time, recurrence, etc.

6. Data Islands

If you thought big data was big, you haven’t seen anything yet. The real value of the IoT comes when you can overlay data from different things — but right now you can’t, because devices are operating on different platforms (see #2). Consider that the connected house generates more than 200 megabytes of data a day, and that it’s all contained within data silos.

7. Information, but Not Insights

All the data processed will create information, eventually intelligence – but we aren’t there yet. Big data tools will be used to collect, store, analyze and distribute these large data sets to generate valuable insights, create new products and services, optimize scenarios and so on. Sensing data accurately and in a timely way is only half the battle. Data needs to be funneled into existing back-end systems, fused with other data sources, analytics and mobile devices and made available to partners, customers and employees.

8. Power Consumption and Batteries

50 billion things are expected to be connected to the Internet by 2020 – how will they all be powered? Battery life and the energy consumed to power sensors and actuators need to be managed more effectively. Wireless protocols and technologies optimized for low data rates and low power consumption are important. Three categories of wireless networking technologies better suited to the IoT are either available or under development: personal area networks, longer-range sensor and mesh networks, and application-specific networks.

9. New Platforms with New Languages and Technologies

Many companies lack the skills to capitalize on the IoT. IoT requires a loosely coupled, modular software environment based on application programming interfaces (APIs) to enable endpoint data collection and interaction. Emerging Web platforms using RESTful APIs can simplify programming, deliver event-driven processes in real time, provide a common set of patterns and abstractions and enable scale. New tools, search engines and APIs are emerging to facilitate rapid prototyping and development of IoT applications.
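As a rough sketch of the RESTful pattern described above (the routing and the sensor registry here are hypothetical, with an in-memory dict standing in for a real web server and device store), each sensor becomes a resource addressed by a URL path, and a small set of verbs covers all interactions:

```python
import json

# Illustrative sketch of the RESTful pattern (hypothetical routing; an
# in-memory dict stands in for a real device registry and web server).

_sensors = {}

def handle(method, path, body=None):
    """Route a REST-style request: sensors are resources under /sensors."""
    parts = path.strip("/").split("/")
    if parts[0] != "sensors":
        return 404, None
    if method == "POST" and len(parts) == 1:
        sensor = json.loads(body)        # register a new sensor
        _sensors[sensor["id"]] = sensor
        return 201, sensor
    if method == "GET" and len(parts) == 2:
        sensor = _sensors.get(parts[1])  # read one sensor's state
        return (200, sensor) if sensor else (404, None)
    if method == "DELETE" and len(parts) == 2:
        _sensors.pop(parts[1], None)     # deregister
        return 204, None
    return 405, None

status, _ = handle("POST", "/sensors", '{"id": "t1", "temp_c": 21.5}')
print(status)                        # 201
print(handle("GET", "/sensors/t1"))  # (200, {'id': 't1', 'temp_c': 21.5})
```

The appeal of the pattern is exactly what the paragraph describes: a common set of abstractions (resources, verbs, status codes) that any endpoint or tool can program against without custom integration work.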

10. Enterprise Network Incompatibility

Many IoT devices aren’t manageable as part of the enterprise network infrastructure. Enterprise-class network management will need to extend into the IoT-connected endpoints to understand basic availability of the devices as well as manage software and security updates. While we don’t need the same level of management access as we do to more sophisticated servers, we do need basic, reliable ways to observe, manage and troubleshoot. Right now, we have to deal with manual and runaway software updates: either there are limited or no automated software updates, or there are automatic updates with no way to stop them.

11. Device Overload

Another issue is scale. Enterprises are used to managing networks of hundreds or thousands of devices. The IoT has the potential to increase these numbers exponentially. So the ways we currently procure, monitor, manage and maintain will need to be revisited.

12. New Communications and Data Architectures

To conserve power and drive down overall cost, IoT endpoints are often limited in storage, processing and communications capabilities. Endpoints that push raw data to the cloud allow for additional processing as well as richer analytics by aggregating data across several endpoints. In the cloud, a “context computer” can combine endpoint data with data from other services via APIs to smartly update, reconfigure and expand the capabilities of IoT devices.
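Here is a minimal sketch of that “context computer” idea (the weather lookup is a stub and all the names are invented for illustration): raw readings pushed by several endpoints are aggregated per zone and fused with data from another service.

```python
# Hypothetical sketch of the "context computer" idea: endpoints push raw
# readings, and a cloud-side step aggregates across endpoints and fuses
# in data from another service (here, a stubbed weather lookup).

def weather_for(zone):
    # Stand-in for a call to an external weather API.
    return {"zone-a": "storm", "zone-b": "clear"}.get(zone, "unknown")

def build_context(readings):
    """Aggregate per-endpoint readings into one context record per zone."""
    zones = {}
    for r in readings:
        z = zones.setdefault(r["zone"], {"temps": []})
        z["temps"].append(r["temp_c"])
    return {
        zone: {
            "avg_temp_c": round(sum(z["temps"]) / len(z["temps"]), 1),
            "endpoints": len(z["temps"]),
            "weather": weather_for(zone),  # fused from another service
        }
        for zone, z in zones.items()
    }

readings = [
    {"zone": "zone-a", "temp_c": 18.0},
    {"zone": "zone-a", "temp_c": 20.0},
    {"zone": "zone-b", "temp_c": 25.0},
]
print(build_context(readings))
```

The individual endpoints stay cheap and dumb; the context, and therefore the intelligence, lives in the aggregation layer.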

The IoT will be a multi-trillion-dollar industry by 2020. But entrepreneurs need to clear the hurdles that threaten to keep the IoT from reaching its full potential.

This article was co-written with Daniel Eckert. The article draws on PwC’s 6th Annual Data IQ Survey. The article first appeared on LinkedIn.