12 Issues Inhibiting the Internet of Things

While the Internet of Things (IoT) accounts for approximately 1.9 billion devices today, that number is expected to exceed 9 billion by 2018, roughly equal to the number of smartphones, smart TVs, tablets, wearable computers and PCs combined. But for the IoT to scale beyond early adopters, it must overcome specific challenges within three main categories: technology, privacy/security and measurement.

Following are 12 hurdles that are hampering the growth of the IoT:

1. Basic Infrastructure Immaturity

IoT technology is still being explored, and the required infrastructure must be developed before it can gain widespread adoption. This is a broad topic, but advancement is needed across the board: in the sensors themselves, sensor interfaces, sensor-specific microcontrollers, data management, communication protocols and targeted application tools, platforms and interfaces. The cost of sensors, especially more sophisticated multimedia sensors, also needs to shrink for usage to expand into mid-market companies.

2. Few Standards

Connections between platforms are only now starting to emerge. (For example, turning on the lights, lowering the temperature, starting some music and locking all the doors when I walk into my house involves four different ecosystems from four different manufacturers.) Competing protocols will create demand for bridge devices. Some progress is emerging in the connected home with the Apple and Google announcements, but the same must happen in the enterprise space.

3. Security Immaturity

Many products are built by smaller companies or leverage open source environments that do not have the resources or time to implement the proper security models. A recent study shows that 70% of consumer-oriented IoT devices are vulnerable to hacking. No IoT-specific security framework exists yet, though the PCI Data Security Standard or the National Institute of Standards and Technology (NIST) Risk Management Guide for Information Technology Systems may find applicability to the IoT.

4. Physical Security Tampering

IoT endpoints are often physically accessible by the very people who would want to meddle with their results: customers interfering with their smart meter, for example, to reduce their energy bill or re-enable a terminated supply.

5. Privacy Pitfalls

Privacy risks will arise as data is collected and aggregated. The collation of multiple points of data can swiftly become personal information as events are reviewed in the context of location, time, recurrence, etc.

6. Data Islands

If you thought big data was big, you haven’t seen anything yet. The real value of the IoT comes from overlaying data from different things, but right now you can’t, because devices operate on different platforms (see #2). Consider that a connected house generates more than 200 megabytes of data a day, all of it contained within data silos.

7. Information, but Not Insights

All the data being processed will create information, and eventually intelligence, but we aren’t there yet. Big data tools will be used to collect, store, analyze and distribute these large data sets to generate valuable insights, create new products and services, optimize scenarios and so on. Sensing data accurately and in a timely way is only half the battle. The data then needs to be funneled into existing back-end systems, fused with other data sources, analytics and mobile devices, and made available to partners, customers and employees.

8. Power Consumption and Batteries

An estimated 50 billion things will be connected to the Internet by 2020. How will they all be powered? Battery life and the energy consumed to power sensors and actuators need to be managed more effectively. Wireless protocols and technologies optimized for low data rates and low power consumption are important here. Three categories of wireless networking technologies better suited to the IoT are available or under development: personal area networks, longer-range sensor and mesh networks, and application-specific networks.

9. New Platforms with New Languages and Technologies

Many companies lack the skills to capitalize on the IoT. IoT requires a loosely coupled, modular software environment based on application programming interfaces (APIs) to enable endpoint data collection and interaction. Emerging Web platforms using RESTful APIs can simplify programming, deliver event-driven processes in real time, provide a common set of patterns and abstractions and enable scale. New tools, search engines and APIs are emerging to facilitate rapid prototyping and development of IoT applications.
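To make the pattern concrete, here is a minimal sketch of the kind of RESTful collection endpoint such platforms expose, written in Python with Flask. The /sensors/<id>/readings resource, the payload shape and the in-memory store are illustrative assumptions, not any particular vendor’s API.

```python
# Minimal sketch of a RESTful ingestion/query endpoint for sensor data.
# The resource path and payload fields are hypothetical.
from flask import Flask, request, jsonify

app = Flask(__name__)
readings = {}  # in-memory store; a real platform would persist and scale this

@app.route("/sensors/<sensor_id>/readings", methods=["POST"])
def ingest(sensor_id):
    reading = request.get_json()  # e.g. {"temp_c": 21.5, "ts": "2014-07-01T12:00Z"}
    readings.setdefault(sensor_id, []).append(reading)
    return jsonify({"accepted": True}), 201

@app.route("/sensors/<sensor_id>/readings", methods=["GET"])
def query(sensor_id):
    return jsonify(readings.get(sensor_id, []))

if __name__ == "__main__":
    app.run()
```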

10. Enterprise Network Incompatibility

Many IoT devices aren’t manageable as part of the enterprise network infrastructure. Enterprise-class network management will need to extend to IoT-connected endpoints to understand the basic availability of the devices as well as manage software and security updates. While we don’t need the same level of management access as we have for more sophisticated servers, we do need basic, reliable ways to observe, manage and troubleshoot. Right now, we have to deal with manual and runaway software updates: either there are limited or no automated software updates, or there are automatic updates with no way to stop them.

11. Device Overload

Another issue is scale. Enterprises are used to managing networks of hundreds or thousands of devices. The IoT has the potential to increase these numbers exponentially. So the ways we currently procure, monitor, manage and maintain will need to be revisited.

12. New Communications and Data Architectures

To conserve power and drive down overall cost, IoT endpoints are often limited in storage, processing and communications capabilities. Endpoints that push raw data to the cloud allow for additional processing as well as richer analytics by aggregating data across several endpoints. In the cloud, a “context computer” can combine endpoint data with data from other services via APIs to smartly update, reconfigure and expand the capabilities of IoT devices.
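As a rough illustration of that idea, the Python sketch below aggregates raw endpoint readings in the cloud, mixes in context from another service and pushes a configuration update back to the devices. fetch_forecast and push_config are hypothetical stand-ins for real service and device-management APIs.

```python
# Illustrative cloud-side "context computer" step: aggregate endpoint
# data, combine it with another service's data, reconfigure devices.
from statistics import mean

def reconfigure_thermostats(readings, fetch_forecast, push_config):
    """readings: {device_id: [indoor temps]}; the callables are placeholders."""
    outdoor = fetch_forecast()["temp_c"]          # context from an external service
    for device_id, temps in readings.items():
        indoor = mean(temps)                      # aggregate across raw samples
        setpoint = 21.0 if outdoor < 5 else 19.0  # trivial stand-in policy
        if abs(indoor - setpoint) > 0.5:
            push_config(device_id, {"setpoint_c": setpoint})
```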

The IoT will be a multitrillion-dollar industry by 2020. But entrepreneurs need to clear the hurdles that threaten to keep the IoT from reaching its full potential.

This article was co-written with Daniel Eckert. The article draws on PwC’s 6th Annual Data IQ Survey. The article first appeared on LinkedIn.

Disjointed Reinsurance Systems: A Recipe for Disaster

Insurers’ numerous intricate reinsurance contracts and special pool arrangements, countless policies and arrays of transactions create a massive risk of unintended exposure. The inability to ensure that each insured risk has the appropriate reinsurance program associated with it is a recipe for disaster.

Having disjointed systems, such as a combination of a policy administration system (PAS) and spreadsheets, or having systems working in silos is a sure way to let risks fall through the cracks. The question is not if it will happen but when, and by how much.

Beyond excessive risk exposure, the dangers are many: claims leakage, poor management of aging recoverables and a lack of business intelligence capabilities. There’s also the likelihood of being unable to track out-of-compliance reinsurance contracts. For instance, if a reinsurer requires a certain exclusion in the policies it reinsures and the direct writer issues a policy without that exclusion, the policy is out of compliance, and the reinsurer may deny liability.

The result is unreliable financial information for trends, profitability analysis and exposure, to name a few.

Having fragmented solutions and manual processes is the worst formula when it comes to audit trails. This is particularly troubling in an age of stringent standards and an increasingly internationally regulated industry. Integrating the right solution will help reduce risks to an absolute minimum.

Consider vendors offering dedicated, comprehensive systems as opposed to policy administration system vendors, which may simply offer “reinsurance modules” as part of all-encompassing systems. Picking the wrong solution will cost the insurer frustration and delays as it attempts to “right” the solution through a series of customizations to add the missing functions, which will surely lead to cost overruns, a lengthy implementation and an uncertain outcome.

Common system features a carrier should look for include:
  • Cession treaties and facultative management
  • Claims and events management
  • Policy management
  • Technical accounting (billing)
  • Bordereaux/statements
  • Internal retrocession
  • Assumed and retrocession operations
  • Financial accounting
  • AP/AR
  • Regulatory reporting
  • Statistical reports
  • Business intelligence
Study before implementing

Picking the right solution is just the start. Implementing a new solution still has many pitfalls. Therefore, the first priority is to perform a thorough and meticulous preliminary study.

The study is directed by the vendor and conducted much like an audit, through a series of meetings and interviews with the different stakeholders: IT, business, etc. It typically lasts one to three weeks, depending on the complexity of the project. A good approach is to spend a half-day conducting the scheduled meetings and the other half drafting the findings, then submitting them for review the following day.

The study should at least contain the following:

  • A detailed report on the company’s current reinsurance management processes.
  • A determination of potential gaps between the carrier’s reinsurance processes and the target solution.
  • A list of contracts and financial data required for going live.
  • Specifications for the interfaces.
  • Definitions of the data conversion and migration strategy.
  • Reporting requirements and strategy.
  • Detailed project planning and identification of potential risks.
  • Repository requirements.
  • Assessment and revision of overall project costs.
Sample outline of a preliminary study (gap analysis):

1. Introduction
  • General introduction and description of project objectives and stakeholders
  • What’s in and out of scope
2. Description of current business setting

3. Business requirements

  • Cession requirements
  • Assumed and retrocession requirements
4. Systems environment topics
  • Interfaces/hardware and software requirements
5. Implementation requirements
6. System administration
  • Access, security, backups
7. Risks, pending issues and assumptions
8. Project management plan

The preliminary study report must be submitted to each stakeholder for review and validation as well as endorsement by the head of the steering committee of the insurance company before the start of the project. If necessary, the study should be revised until all parts are adequately defined. Ideally, the report should be used as a road map by the carrier and vendor.

All project risks and issues identified at this stage will be incorporated into the project planning; it saves much time and money to discover them before the implementation phase. One of the main reasons projects fail is poor communication, so key people on different teams need to actively communicate with each other. At least one person from each invested area (IT, business and upper management) must be part of a well-defined steering committee.

A clear-cut escalation process must be in place to tackle any foreseeable issues and address them in a timely manner.

A Successful Implementation Process
The following are key areas and related guidelines that are essential to successfully carrying out a project.

Data cleansing
Before migration, an in-depth data scrubbing, or cleansing, is recommended. This is the process of amending or removing data from the existing applications that is erroneous, incomplete, inadequately formatted or replicated. The discrepancies discovered may have originally been produced by user-entry errors or by corruption in transmission or storage.

Data cleansing may also include actions such as harmonization of data, which relates to identifying commonalities in data sets and combining them into a single data component, as well as standardization of data, which is a means of changing a reference data set to a new standard—in other words, use of standard codes.
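Below is a minimal sketch of these cleansing steps in Python, assuming hypothetical record fields (policy_id, premium, lob) and an invented code-standardization table; a production project would typically rely on dedicated data-quality or ETL tooling rather than a hand-rolled script.

```python
# Sketch: drop incomplete rows, standardize codes, collapse duplicates.
# Field names and the code table are illustrative assumptions.
LOB_STANDARD = {"FIRE": "PROP-FIRE", "Fire": "PROP-FIRE", "APD": "AUTO-PD"}

def cleanse(records):
    seen, clean = set(), []
    for rec in records:
        if not rec.get("policy_id") or rec.get("premium") is None:
            continue                                   # remove incomplete data
        lob = rec.get("lob")
        rec["lob"] = LOB_STANDARD.get(lob, lob)        # standardize to new codes
        key = (rec["policy_id"], rec["lob"])
        if key not in seen:                            # remove replicated rows
            seen.add(key)
            clean.append(rec)
    return clean
```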

Data migration

Data migration pertains to the moving of data between the existing system (or systems) and the target application as well as all the measures required for migrating and validating the data throughout the entire cycle. The data needs to be converted so that it’s compatible with the reinsurance system before the migration can take place.

This conversion is a mapping of all the data, with business rules and the relevant codes attached to it; the step is required before the automated migration can take place.
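The sketch below shows what such a mapping can look like in code. The source and target field names and the code tables are invented for illustration; in practice they come out of the preliminary study.

```python
# Sketch: convert a source-system row to the target system's layout,
# translating legacy codes along the way. All names are hypothetical.
FIELD_MAP = {"pol_no": "policy_id", "eff_dt": "effective_date", "lob_cd": "lob"}
CODE_MAP = {"lob": {"01": "PROP-FIRE", "02": "AUTO-PD"}}

def convert(source_row):
    target = {FIELD_MAP[k]: v for k, v in source_row.items() if k in FIELD_MAP}
    for field, codes in CODE_MAP.items():
        if field in target:
            target[field] = codes.get(target[field], target[field])
    return target

# convert({"pol_no": "P-123", "eff_dt": "2015-01-01", "lob_cd": "01"})
# -> {"policy_id": "P-123", "effective_date": "2015-01-01", "lob": "PROP-FIRE"}
```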

An effective and efficient data migration effort involves anticipating potential issues and threats as well as opportunities, such as determining the most suitable data-migration methodology early in the project, and taking appropriate measures to mitigate potential risks. The most suitable methodology differs from one carrier to another based on its particular business model.

Analyze and understand the business requirements before gathering and working on the actual data. Thereafter, the carrier must delineate what needs to be migrated and how far back. In the case of long-tail business, such as asbestos coverage, all the historical data must be migrated. This is because it may take several years or decades to identify and assess claims.

Conversely, for short-tail lines, such as property fire or physical auto damage, for which losses are usually known and paid shortly after the loss occurs, only the applicable business data is to be singled out for migration.

A detailed mapping of the existing data and system architecture must be drafted to isolate any issues related to the conversion early on. Most likely, workarounds will be required to overcome the specificities or constraints of the new application. As a result, it will be crucial to establish checks and balances or guidelines to validate the quality and accuracy of the data to be loaded.

Identifying subject-matter experts who are thoroughly acquainted with the source data will lessen the risk of missing undocumented data snags and help ensure the success of the project. Therefore, proper planning for accessibility to qualified resources at both the vendor and insurer is critical. You’ll also need experts in the existing systems, the new application and other tools.

Interfaces

Interfaces in a reinsurance context involve connecting the data residing in the upstream system, or PAS, to the reinsurance management system, plus integrating the reinsurance data with other applications, such as the general ledger, the claims system and business intelligence tools.

Integration and interfaces are achieved by exchanging data between two different applications but can include tighter mechanisms such as direct function calls. These are synchronous communications used for information retrieval. The synchronous request is made using a direct function call to the target system.
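The two styles can be contrasted in a short Python sketch; the PAS URL and record layout are assumptions made only for illustration.

```python
# Sketch: synchronous retrieval vs. bulk data exchange between systems.
import json
import urllib.request

def get_policy_sync(policy_id):
    # Synchronous request/response: block until the upstream PAS answers.
    url = f"https://pas.example.com/policies/{policy_id}"  # hypothetical endpoint
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def export_batch(records, path):
    # Bulk exchange: write a file for the target system to ingest later.
    with open(path, "w") as f:
        json.dump(records, f)
```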

Again, choosing the right partner will be critical. A provider with extensive experience in developing interfaces between primary insurance systems, general ledgers, BI suites and reinsurance solutions most likely has already developed such interfaces for the most popular packages and will have the know-how and best practices to develop new ones if needed. This will ensure that the process will proceed as smoothly as possible.

After the vendor (primarily) and the carrier carry out all the essential implementation specifics, consolidating the process automation and integrations required to deliver the system, the goal is a fully deployable and testable solution that is ready for user acceptance testing in the reinsurance system’s test environment.

Formal user training must take place beforehand. It needs to include a role-based program and ought not to be a “one-size-fits-all” training course. Each user group needs to have a specific training program that relates to its particular job functions.

The next step is to prepare for deployment in production. You’ll need to perform a number of parallel runs of the existing reinsurance solution and the new reinsurance system, replicating each process and reaching the same outcome in both, before going live.
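A parallel run amounts to feeding the same batch to both systems and reconciling the outputs item by item, as in this simplified sketch (run_legacy and run_new are placeholders for the two systems’ processing):

```python
# Sketch: reconcile outputs of the old and new systems for one batch.
def compare_parallel_run(run_legacy, run_new, batch):
    old, new = run_legacy(batch), run_new(batch)   # {cession_id: amount} maps
    mismatches = []
    for key in sorted(old.keys() | new.keys()):
        if old.get(key) != new.get(key):
            mismatches.append((key, old.get(key), new.get(key)))
    return mismatches  # an empty list means the runs reconcile
```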

Now that you’ve installed a modern, comprehensive reinsurance management system, you’ll have straight-through automated processing with all the checks and balances in place. You will be able to reap the benefits of a well-thought-out strategy paired with an appropriate reinsurance system, leading to superior controls, reduced risk and better financials. You’ll no longer have any dangerous hidden “cracks” in your reinsurance program.
This article first appeared in Carrier Management magazine.