
The Future of Asset Management

Insurance companies have long faced obstacles in optimizing their asset portfolios, which sit on the balance sheets of highly regulated, complex operating companies offering a wide range of financial products. Traditional one-size-fits-all methods for solving agency problems and hesitancy toward adopting new technologies have held them back. Over- or underutilization of quant models and the absence of a clear process for managing multiple tradeoffs have also prevented insurers from achieving optimal returns and customization.

Investment professionals and third-party managers, on the other hand, have avoided insurance company asset management because of the industry's multiple layers of strict regulation and insurers' complex capital and liability structures.

The pressure to get better returns, along with structural changes and emerging financial technologies, has caused insurers and asset managers to reassess their strategies and look for ways to transform their old approaches to investing. That’s where the practice of Enterprise Driven Investing can help.

See also: A New Way of Thinking on Assets  

Enterprise Driven Investing (EDI) for insurers is a business management process that attempts to address several pitfalls, improve decision-making and enhance results. The goal of EDI is to achieve a high level of portfolio customization in the most financially efficient manner. This is done through a four-step process.

Step 1: Establish the full set of financial variables and set priorities

EDI begins by establishing and prioritizing the complete set of financial considerations. The presence of multiple, interacting business factors is the principal characteristic that distinguishes Enterprise Driven Investing from Liability Driven Investing and motivates this first step. These considerations include form of ownership, liabilities from a global encyclopedia of risk, actuarially complex policy terms and product options, taxes, liquidity requirements, colliding capital objectives, affiliate structures, competing rating agencies and several regulatory regimes that are rarely coordinated and, in combination, are the most complex in business. EDI’s first step captures the complete set of these variables and then challenges the board and senior management team to designate which are primary, secondary and less relevant to their organization.

Step 2: Design a portfolio objective, and related performance measures, from the financial priorities

Insurance asset management in any form is as much a design challenge as an investment one. Careful design of the primary objective is the gatekeeper to successful EDI. Portfolio objectives for these entities are no different from those of other portfolios in that they are two-dimensional measurements of risk and return. The definition of each, however, has financial attributes linked to an operating company. Return can be total return, net investment income (NII), cash flow, a combination or something else entirely. Risk can be portfolio volatility, CVaR, TVaR, economic shortfall, Solvency II capital charges, etc. (a short sketch of one such measure appears after the list below). Even the best selection will have shortcomings. A poorly conceived objective can, by itself, entirely offset the talents of a high-performing investment team. Success in the design phase will occur if four guide rails are in place:

  1. Company-specific customization – Investment objectives should be dictated by market segment based on lines of business, ownership structure, scale and domicile.
  2. Clarity of timeframe – This should be longer term but explicit (e.g. three years).
  3. Proper selection and calibration of constraints – Sensitivity analysis is the radar used to navigate changing circumstances and costs.
  4. Establishment of investment skill metrics – Legitimate performance evaluation of both internal and external managers remains one of the most challenging and increasingly important design requirements in insurance asset management.
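
As referenced above, here is a minimal sketch of one candidate risk measure, historical CVaR (conditional value-at-risk, also called expected shortfall). The 95% confidence level and the simulated monthly return series are illustrative assumptions, not figures from the article.

```python
# Minimal sketch: historical CVaR (expected shortfall) at the 95% level.
# The simulated monthly returns below are purely illustrative.
import numpy as np

def historical_cvar(returns, alpha=0.95):
    """Average loss in the worst (1 - alpha) tail of the return distribution."""
    losses = -np.asarray(returns)        # convert returns to losses
    var = np.quantile(losses, alpha)     # value-at-risk cut-off
    return losses[losses >= var].mean()  # mean loss beyond VaR = CVaR

monthly_returns = np.random.default_rng(0).normal(0.004, 0.02, 120)  # ten years of months
print(f"95% monthly CVaR: {historical_cvar(monthly_returns):.2%}")
```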

Step 3: Establish a strategy to meet the portfolio objective with full consideration of the impact on factors not directly expressed in this objective

A portfolio objective expresses an insurer’s most important return and risk measurements. The challenge is balancing the portfolio objective with other dynamic financial parameters. One response to this challenge has been portfolio optimization with multiple constraints. While helpful, relying on a single output from these analyses has weaknesses. For example, it introduces black box risk and naïve precision. It also fails to consider important variables and masks the relative significance of various assumptions and financial relationships.

As a practical decision-making tool, EDI avoids these weaknesses by highlighting the collateral impact on key trip wires of changes motivated by the portfolio objective. While all companies estimate changes in portfolio strategy against the portfolio objective, many do not grasp the shadow-pricing sensitivity of the objective to variation in constraints or, conversely, are blind to the impact of rebalancing on the full set of financial variables. For many companies, this sensitivity is both substantial and unknown. Managing sensitivity, particularly to self-imposed limits, advances EDI from a passive to an active philosophy.
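
To make the idea concrete, here is a minimal sketch of a constrained portfolio optimization whose constraint dual values expose exactly this shadow-price sensitivity. The asset classes, expected returns, covariance matrix and limits are invented for illustration, and the use of the cvxpy library is our assumption rather than anything prescribed by EDI.

```python
# Hedged sketch: maximize expected return subject to a self-imposed risk budget
# and an equity cap, then read each constraint's dual value (shadow price).
import cvxpy as cp
import numpy as np

mu = np.array([0.035, 0.045, 0.060])               # govt, corp, equity (illustrative)
cov = np.array([[0.0004, 0.0002, 0.0001],
                [0.0002, 0.0009, 0.0004],
                [0.0001, 0.0004, 0.0225]])          # illustrative covariance matrix

w = cp.Variable(3)
risk_budget = cp.quad_form(w, cov) <= 0.0009        # self-imposed volatility limit (3%^2)
equity_cap = w[2] <= 0.10                           # rating/regulatory-style equity cap
prob = cp.Problem(cp.Maximize(mu @ w),
                  [cp.sum(w) == 1, w >= 0, risk_budget, equity_cap])
prob.solve()

print("weights:", np.round(w.value, 3))
# Each dual value approximates the expected return forgone per unit of
# tightening that constraint -- the sensitivity EDI surfaces for management.
print("shadow price of risk budget:", risk_budget.dual_value)
print("shadow price of equity cap:", equity_cap.dual_value)
```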

Step 4: Explore ways to improve tradeoffs through higher order changes

EDI begins by creating a comprehensive set of company-specific financial considerations, then establishes priorities (including the portfolio objective) and forms investment strategy after using sensitivity analysis to highlight the direction and leverage of the relationships among these variables. In its most advanced form, creative reengineering resets trade-offs to a more favorable state and forms new ones. A few categories for these ideas are summarized below, but the opportunities are by no means confined to these topics:

  • Improved capital and tax efficiency – Developing strategies to increase returns, reduce volatility, improve portfolio diversification and reduce capital charges.
  • Bifurcation of assets based on line of business volatility rather than asset class volatility – Organizing the balance sheet into reserves and capital based on the volatility of the lines of business.
  • New approaches to asset/liability management (ALM) – Structured finance experts should create bespoke products that improve ALM, in the same way they have used their expertise for capital efficiency.

See also: How to Manage Strategic Relations 

While EDI is the future of insurance company investing, it is also a framework for all investors to manage the tradeoffs between hyper-customization and pure investment efficiency. Also, because EDI explicitly recognizes, rather than avoids, the full set of enterprise variables, it represents a major advance in balance sheet management over LDI and one-size-fits-all quantitative methods. As such, EDI reveals otherwise hidden paths to significantly better results. Finally, as a stable but dynamic business management process, EDI principles accommodate the full spectrum of emerging financial theories and technologies.


In Third Parties We (Mis)trust?

Technology is transforming trust. Never has it been easier to start a commercial relationship across a great geographical distance. With a credible website and reasonable products or services, people are prepared to learn about companies half a world away and enter into commerce with them.

Society is changing radically when people find themselves trusting people with whom they’ve had no experience, e.g. on eBay or Facebook, more than they trust banks they’ve dealt with their whole lives.

Mutual distributed ledgers pose a threat to the trust relationship in financial services.

The History of Trust

Trust leverages a history of relationships to extend credit and benefit of the doubt to someone. Trust is about much more than money; it’s about human relationships, obligations and experiences and about anticipating what other people will do.

In risky environments, trust enables cooperation and permits voluntary participation in mutually beneficial transactions that are otherwise costly to enforce or cannot be enforced by third parties. By taking a risk on trust, we increase the amount of cooperation throughout society while simultaneously reducing the costs, unless we are wronged.

Trust is not a simple concept, nor is it necessarily an unmitigated good, but trust is the stock-in-trade of financial services. In reality, financial services trade on mistrust. If people trusted each other on transactions, many financial services might be redundant.

People use trusted third parties in many roles in finance: for settlement, as custodians, as payment providers, as poolers of risk. Trusted third parties perform three roles:

  • validate – confirming the existence of something to be traded and membership of the trading community;
  • safeguard – preventing duplicate transactions, i.e. someone selling the same thing twice or “double-spending”;
  • preserve – holding the history of transactions to help analysis and oversight, and in the event of disputes.

A ledger is a book, file or other record of financial transactions. People have used various technologies for ledgers over the centuries. The Sumerians used clay cuneiform tablets. Medieval folk split tally sticks. In the modern era, the implementation of choice for a ledger is a central database, found in all modern accounting systems. In many situations, each business keeps its own central database with all its own transactions in it, and these systems are reconciled, often manually and at great expense if something goes wrong.

But in cases where many parties interact and need to keep track of complex sets of transactions, they have traditionally found that creating a centralized ledger is helpful. A centralized transaction ledger needs a trusted third party who makes the entries (validates), prevents double counting or double spending (safeguards) and holds the transaction histories (preserves). Over the ages, centralized ledgers have been found in registries (land, shipping, tax), exchanges (stocks, bonds) and libraries (index and borrowing records), just to give a few examples.

The latest technological approach to all of this is the distributed ledger (aka blockchain aka distributed consensus ledger aka the mutual distributed ledger, or MDL, the term we’ll stick to here). To understand the concept, it helps to look back over the story of its development:

1960/’70s: Databases

The current database paradigm began around 1970 with the invention of the relational model, and the widespread adoption of magnetic tape for record-keeping. Society runs on these tools to this day, even though some important things are hard to represent using them. Trusted third parties work well on databases, but correctly recording remote transactions can be problematic.

One approach to remote transactions is to connect machines and work out the lumps as you go. But when data leaves one database and crosses an organizational boundary, problems start. For Organization A, the contents of Database A are operational reality, true until proven otherwise. But for Organization B, the message from A is a statement of opinion. Orders sit as “maybe” until payment is made and has cleared past the last possible chargeback: This tentative quality is always attached to data from the outside.

1980/’90s: Networks

Ubiquitous computer networking came of age two decades after the database revolution, starting with protocols like email and hitting its full flowering with the invention of the World Wide Web in the early 1990s. The network continues to get smarter, faster and cheaper, as well as more ubiquitous – and it is starting to show up in devices like our lightbulbs under names like the Internet of Things. While machines can now talk to each other, the systems that help us run our lives do not yet connect in joined-up ways.

Although in theory information could just flow from one database to another with your permission, in practice the technical costs of connecting databases are huge. Worse, we go back to paper and metaphors from the age of paper because we cannot get the connection software right. All too often, the computer is simply a way to fill out forms: a high-tech paper simulator. It is nearly impossible to get two large entities to share our information between them on our behalf.

Of course, there are attempts to clarify this mess – to introduce standards and code reusability to help streamline business interoperability. You can choose from EDI, XMI-EDI, JSON, SOAP, XML-RPC, JSON-RPC, WSDL and half a dozen more standards to “assist” your integration processes. The reason there are so many standards is that none of them has finally solved the problem.

Take the problem of scaling collaboration. Say that two of us have paid the up-front costs of collaboration and have achieved seamless technical harmony, and now a third partner joins our union, then a fourth and a fifth … by five partners, we have 10 connections to debug; by 10 partners, the number is 45. The cost of collaboration keeps going up for each new partner who joins our network, and the result is small pools of collaboration that just will not grow. This isn’t an abstract problem – this is banking, this is finance, medicine, electrical grids, food supplies and the government.
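
The arithmetic behind that growth is the standard pairwise-connection count, n(n-1)/2. A two-line sketch makes the quadratic blow-up obvious (the partner counts are chosen purely for illustration):

```python
# Pairwise connections among n partners grow quadratically: n * (n - 1) / 2.
def connections(n: int) -> int:
    return n * (n - 1) // 2

for n in (2, 5, 10, 50):
    print(n, "partners ->", connections(n), "connections")   # 1, 10, 45, 1225
```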

A common approach to this quadratic quandary is to put somebody in charge, a hub-and-spoke solution. We pick an organization – Visa would be typical – and all agree that we will connect to Visa using its standard interface. Each organization has to get just a single connector right. Visa takes 1% off the top, making sure that everything clears properly.

But while a third party may be trusted, it doesn’t mean it is trustworthy. There are a few problems with this approach, but they can be summarized as “natural monopolies.” Being a hub for others is a license to print money for anybody that achieves incumbent status. Visa gets 1% or more of a very sizeable fraction of the world’s transactions with this game; Swift likewise.

If you ever wonder what the economic upside of the MDL business might be, just have a think about how big that number is across all forms of trusted third parties.

2000/’10s: Mutual Distributed Ledgers

MDL technology securely stores transaction records in multiple locations with no central ownership. MDLs allow groups of people to validate, record and track transactions across a network of decentralized computer systems with varying degrees of control of the ledger. Everyone shares the ledger. The ledger itself is a distributed data structure held in part or in its entirety by each participating computer system. The computer systems follow a common protocol to add transactions. The protocol is distributed using peer-to-peer application architecture. MDLs are not technically new – concurrent and distributed databases have been a research area since at least the 1970s. Z/Yen built its first one in 1995.

Historically, distributed ledgers have suffered from two perceived disadvantages: insecurity and complexity. These two perceptions are changing rapidly because of the growing use of blockchain technology, the MDL of choice for cryptocurrencies. Cryptocurrencies need to:

  • validate – have a trust model for time-stamping transactions by members of the community;
  • safeguard – have a set of rules for sharing data of guaranteed accuracy;
  • preserve – have a common history of transactions.

If faith in the technology’s integrity continues to grow, then MDLs might substitute for two roles of a trusted third party: preventing duplicate transactions and providing a verifiable public record of all transactions. Trust moves from the third party to the technology. Emerging techniques, such as smart contracts and decentralized autonomous organizations, might in the future also permit MDLs to act as automated agents.

A cryptocurrency like bitcoin is an MDL with “mining on top.” The mining substitutes for trust: “proof of work” is simply proof that you have a warehouse of expensive computers working, and the proof is the output of their calculations! Cryptocurrency blockchains do not require a central authority or trusted third party to coordinate interactions, validate transactions or oversee behavior.
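
A hashcash-style toy example shows the flavor of proof of work: keep hashing until the output clears a difficulty target, and anyone can verify the winning nonce with a single hash. This is a simplified illustration, not the actual bitcoin mining algorithm or its real parameters.

```python
# Toy proof of work: find a nonce so that SHA-256(data + nonce) starts with
# `difficulty` zero hex digits. Finding it takes many attempts; checking takes one.
import hashlib

def mine(block_data: str, difficulty: int = 4):
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = mine("ledger entries ...")
print(nonce, digest)
```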

However, when the virtual currency is going to be exchanged for real-world assets, we come back to needing trusted third parties to trade ships or houses or automobiles for virtual currency. A big consequence may be that the first role of a trusted third party, validating an asset and identifying community members, becomes the most important. This is why MDLs may challenge the structure of financial services, even though financial services are here to stay.

Boring ledgers meet smart contracts

MDLs and blockchain architecture are essentially protocols that can work as well as hub-and-spoke for getting things done, but without the liability of a trusted third party in the center that might choose to exploit its natural monopoly. Even with smaller trusted third parties, MDLs have some magic properties: the same agreed data on all nodes (“distributed consensus”) rather than data passed around through messages.

In the future, smart contracts can store promises to pay and promises to deliver without having a middleman or exposing people to the risk of fraud. The same logic that secured “currency” in bitcoin can be used to secure little pieces of detached business logic. Smart contracts may automatically move funds in accordance with instructions given long ago, like a will or a futures contract. For pure digital assets, there is no counterparty risk, because the value to be transferred can be locked into the contract when it is created and released automatically when the conditions and terms are met: If the contract is clear, then fraud is impossible, because the program actually has real control of the assets involved rather than requiring trustworthy middlemen like ATMs or car rental agents. Of course, such structures challenge some of our current thinking on liquidity.
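
As a rough model of that idea, the sketch below locks a payment into a contract object at creation and releases it only when the agreed condition evaluates true. It is plain Python standing in for the concept, not code for any particular blockchain platform, and every name in it is invented.

```python
# Illustrative escrow-style "smart contract": value is committed up front and
# released automatically when the condition is met, with no middleman.
from dataclasses import dataclass
from typing import Callable

@dataclass
class EscrowContract:
    payer: str
    payee: str
    amount: float
    condition: Callable[[], bool]   # e.g. "delivery confirmed" or "maturity date reached"
    settled: bool = False

    def settle(self) -> str:
        if self.settled:
            return "already settled"
        if self.condition():
            self.settled = True
            return f"{self.amount:,.2f} released from {self.payer} to {self.payee}"
        return "condition not met; funds remain locked"

contract = EscrowContract("Buyer A", "Supplier B", 10_000, condition=lambda: True)
print(contract.settle())
```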

Long Finance has a Zen-style koan, “if you have trust I shall give you trust; if you have no trust I shall take it away.” Cryptocurrencies and MDLs are gaining more and more trust. Trust in contractual relationships mediated by machines sounds like science fiction, but the financial sector has profitably adapted to the ATM, Visa, Swift, Big Bang, HFT and many other innovations. New ledger technology will enable new kinds of businesses, as reducing the cost of trust and fixing long-standing problems allows new kinds of enterprises to be profitable. The speed of adoption of new technology sorts winners from losers.

Make no mistake: The core generation of value has not changed; banks are trusted third parties. The implication, though, is that much more will be spent on identity – such as Anti-Money-Laundering/Know-Your-Customer checks backed by indemnity – and on asset validation than on transaction fees.

A U.S. political T-shirt about terrorists and religion inspires a closing thought: “It’s not that all cheats are trusted third parties; it’s that all trusted third parties are tempted to cheat.” MDLs move some of that trust into technology. And as costs and barriers to trusted third parties fall, expect demand and supply to increase.

How Bureaucracy Drives WC Costs

Workers’ compensation is one of the most highly regulated lines of insurance. Every form filed and every payment transaction is an opportunity for a penalty. Claims can stay open for 30 years or longer, leading to thousands of transactions on a single claim. Each state presents different sets of compliance rules for payers to follow. This bureaucracy is adding significant cost to the workers’ compensation system, but is it improving the delivery of benefits to injured workers?

Lack of Uniformity

Workers’ compensation is regulated at the state level, which means every state has its own set of laws and rules governing the delivery of indemnity and medical benefits to injured workers. This state-by-state variation also exists in the behind-the-scenes reporting of data. Most states now require some level of electronic data interchange (EDI) from the payers (carriers or self-insured employers). There is no common template between the states; therefore carriers must set up separate data feeds for each state. This is made even more complex when you factor in the multiple sources from which payers must gather this data for their EDI reporting. Data sources include employers, bill review and utilization review vendors. The data from all these vendors must be combined into a single data feed to the states. If states change the data reporting fields, each of the vendors in the chain must also make changes to their feeds.
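
To illustrate the aggregation burden, the sketch below merges records from several hypothetical vendor feeds into a single per-state submission. The field names and layout are invented for illustration; real state EDI formats (for example, the IAIABC releases) are far more detailed.

```python
# Hedged sketch: combine employer, bill review and utilization review feeds
# into one consolidated feed per state. All fields are invented.
from collections import defaultdict

employer_feed = [{"claim_id": "C-100", "state": "CA", "injury_date": "2016-01-12"}]
bill_review_feed = [{"claim_id": "C-100", "medical_paid": 1250.00}]
utilization_feed = [{"claim_id": "C-100", "ur_decisions": 2}]

def merge_feeds(*feeds):
    combined = defaultdict(dict)
    for feed in feeds:
        for record in feed:
            combined[record["claim_id"]].update(record)   # one merged record per claim
    return combined.values()

by_state = defaultdict(list)
for claim in merge_feeds(employer_feed, bill_review_feed, utilization_feed):
    by_state[claim["state"]].append(claim)                # one consolidated feed per jurisdiction

print(dict(by_state))
```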

Variation also exists in the forms that must be filed and notices that must be posted in the workplaces. This means that payers must constantly monitor and update the various state requirements to ensure they stay in full compliance with the regulations.

Unnecessary Burden

Much of the workers’ compensation compliance efforts focus on the collection of data, which is ultimately transmitted to the states. The states want this information to monitor the system and ensure it is operating correctly, but is all this data necessary? Some states provide significant analytical reports on their workers’ compensation systems, but many do little with the data that they collect. In a world concerned about cyber risk, collecting and transmitting claims data creates a significant risk of a breach. If the data is not being used by the states, the risk associated with collecting and transmitting it seems unnecessary.

Another complication is that there are multiple regulators involved in the system for oversight in each jurisdiction. Too often, this means payers have to provide the same information to multiple parties because information sent to the state Department of Insurance is not shared with the state Division of Workers’ Compensation and vice versa.

Some regulation is also outdated based on current technology. Certain states require the physical claims files to be handled within that state. However, with many payers now going paperless, there are no physical claims files to provide. Other states require checks to be issued from a bank within those states. Electronic banking makes this requirement obsolete.

How Is This Driving Costs?

All payers have a significant amount of staffing and other resources devoted to compliance efforts. From designing systems to gathering and entering data, this is a very labor-intensive process. There have not been any studies on the actual costs to the system from these compliance efforts, but they easily equate to millions of dollars each year.

States also impose penalties for a variety of things, including late filing of forms and late or improper payment of benefits. The EDI process makes it possible for these penalties to be automated, but that raises the question of the purpose of the penalties altogether. These penalties are issued on a strict liability basis; in other words, either the form was filed in a timely manner or it was not. A payer could be 99% compliant on one million records, yet it would be automatically penalized for the 1% of records that were incorrect. In this scenario, are the penalties encouraging compliance, or are they simply a source of revenue for the state? A fairer system would acknowledge where compliance efforts are being made. Rather than penalize every payer for every error, reserve penalties for those that fall below certain compliance thresholds (say, 80% or 90% compliance).
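
As a simple illustration of that threshold idea (the cutoff and fine amounts below are made up):

```python
# Penalize only payers whose compliance rate falls below a chosen threshold,
# rather than fining every individual late record.
def penalty_due(records_filed: int, records_on_time: int,
                threshold: float = 0.90, per_record_fine: float = 100.0) -> float:
    if records_on_time / records_filed >= threshold:
        return 0.0                                   # good-faith compliance, no penalty
    return (records_filed - records_on_time) * per_record_fine

print(penalty_due(1_000_000, 990_000))   # 99% compliant -> 0.0
print(penalty_due(1_000_000, 850_000))   # 85% compliant -> fined on the late records
```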

The laws themselves can be vague and open to interpretation, which leads to unnecessary litigation expenses. Terms such as “reasonable” and “usual and customary” are intentionally vague, and often states will not provide further definition of these terms.

How Can We Improve?

One of the goals of workers’ compensation regulations is to ensure that injured workers are paid benefits in a timely manner at the correct rate and that they have access to appropriate medical treatment. There was a time when payers had offices located in most states, with adjusters handling only that state. Now, with most payers utilizing multi-state adjusters, payers must constantly train and educate their adjusters to ensure that they understand all of the nuances of the different states they handle.

The ability to give input to regulators is also invaluable, and payers should seek opportunities to engage with organizations to create positive change. Groups such as the International Association of Industrial Accident Boards and Commissions (IAIABC) and the Southern Association of Workers’ Compensation Administrators (SAWCA) give workers’ compensation stakeholders the opportunity to interact with regulators on important issues and also to seek uniformity where it makes sense (EDI, for example).

There needs to be better transparency and communication between all parties in the rule-making process so that regulators have a better understanding of the impact these rules have on payers and the effort required to achieve compliance.

Developing standards in technology would be helpful for both the payers and the states. If your systems cannot effectively communicate with the other systems, you cannot be efficient. Upgrading technology across the industry, particularly on the regulatory side, has to become a priority.

Finally, we need to give any statutory reforms time to make an impact before changing them again because the constant change adds to confusion and drives costs. In the last 10 years, there have been more than 9,000 bills introduced in various jurisdictions related to workers’ compensation. Of those, about 1,000 have actually been turned into law. People expect that these reforms will produce the desired results immediately, when in reality these things often take time to reach their full impact.

These issues were discussed in depth during an “Out Front Ideas With Kimberly and Mark” webinar on Feb. 9, 2016. View the archived webinar at http://www.outfrontideas.com/archives/.

14 Things to Know About ACA Software

If you are a large employer or employee benefit broker, chances are you have spent a lot of time trying to determine the best ACA 6056 reporting and compliance solution. At ACA Reporting Service, we do not sell software – rather, full-service reporting. However, we have researched almost a dozen different Affordable Care Act employer reporting and compliance vendors, and we thought we would pass along what we learned.

Beginning Questions to Ask Yourself

As an employer or benefit broker, this is how the ACA software question breaks down for you.

1). Some employers will have their online enrollment (benefits administration) and payroll with the same vendor. In those cases, as long as the client is willing to pay for it, it will likely make sense to just perform this required ACA reporting of IRS forms 1094 and 1095 with that vendor.

2). Some employers will not have an outside benefits administration vendor or payroll. They do everything in-house. For these employers, there is going to be a lot of ACA work to be done, and obviously you will need a stand-alone solution.

3). Finally, you have some employers that have payroll and benefits administration with different vendors. This would include the scenario where one of these functions is performed in-house. In these cases, you will either need to consolidate both payroll and benefit plan elections with one vendor, or you will need a stand-alone solution.

Basic conclusion: If you are an employee benefit broker with various types of employer clients, we don’t see a scenario where you can get away without having a stand-alone ACA software solution to help your clients meet their 6056 Affordable Care Act employer reporting requirements.

What do you need to know in evaluating ACA stand-alone software vendors? Here are some questions to ask yourself:

1). Security? What if all the Social Security numbers of your client’s employees were stolen? Can you imagine the fallout? Many of the systems we reviewed were severely lacking in terms of security. What level of encryption is being used for the data?

2). Branded to your company? Many different ACA reporting vendors offer the ability to brand a portal to your company and let your employer clients log in.

3). Is the system mainly a benefits administration system? This can make an extreme difference. Will this add costs for the ACA reporting module? Also, with many benefits administration systems, there are additional charges for EDI (electronic data interchange – where election data is sent to insurance carriers). Will additional fees apply with this new ACA reporting?

4). Is the ACA reporting solution even built yet? Many, MANY of the ACA reporting module demos we sat in on were from vendors that do not even have the software built yet.

5). How long will your data be stored? The IRS has said that audits will begin about 18 months after filings are made and can span seven years in total. If you do not have a way to get back to your data at the time of an inquiry, you are stuck.

6). Is your vendor set up to file with the IRS? Did they just lie to you and say yes to that question? As of the writing of this blog, no one is set up to file with the IRS electronically (efile) for forms 1094 and 1095. The IRS has literally just issued the guidelines to begin getting started with this.

7). Variable hour tracking? Do you need variable hour tracking to determine eligibility? For many employers, a simple spreadsheet will do the trick. Many vendors have quite robust capabilities in this area, and for some employers this will definitely make sense.

8). The “Gotcha Moment.” This comes at the end of a great presentation when you’re told there is an additional charge of $3 to $5 per employee to file the forms with the IRS. Generally, these costs will render noncompetitive whatever solution you were just shown.

9). Robust ACA logic? We cannot tell you how important this is! If you have spent as much time looking at these forms as we have (especially form 1095-C lines 14, 15 and 16), you will know that performing this reporting is MUCH MORE than just uploading a spreadsheet. The codes for these lines are based on logic. Most systems do not have this logic built in, so it will be up to you as an employer or benefit broker to figure this out. For most employee benefit agencies, you can count on this little “bug” shutting down your operations in January.

What if you decide to just file them incorrectly? When 100 employees of your largest client bring in letters from the IRS, you will realize this was a very bad idea.

Also, without robust logic built into the system, there will be no accommodation for situations such as someone terminating in November/December and then electing COBRA in January. The codes for these situations are different.
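
To show why month-by-month logic matters, here is a deliberately simplified sketch of how line 14/16 codes might be derived from an employee's status in a given month. The mapping is illustrative only and omits COBRA, affordability safe harbors and many other cases; verify any real implementation against the current IRS instructions for forms 1094-C and 1095-C.

```python
# Simplified, illustrative mapping from monthly status to 1095-C line 14/16 codes.
# Real ACA logic has many more cases (COBRA, safe harbors, limited non-assessment
# periods, etc.) -- do not use this as-is.
def monthly_codes(employed: bool, offered_coverage: bool, enrolled: bool):
    if not employed:
        return "1H", "2A"        # no offer reported / not employed that month
    if offered_coverage and enrolled:
        return "1E", "2C"        # coverage offered / employee enrolled
    if offered_coverage:
        return "1E", ""          # offered but waived; line 16 depends on safe harbors
    return "1H", ""              # employed, no offer

for month, status in [("Nov", (True, True, True)),       # active and enrolled
                      ("Dec", (False, False, False))]:    # terminated
    print(month, monthly_codes(*status))
# A January COBRA election would need yet another combination that this
# simplified mapping does not attempt to cover.
```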

10). Are forms stored for future access and corrections? Bottom line – there are going to be issues with the reporting from time to time. Do you have the ability to go back into the system and create a new/corrected 1094 or 1095 form on behalf of the employee? Many systems that rely solely on a census upload would require you to basically start over to make this one fix. OR, your staff can just manually create one in .pdf, which will take a lot of time.

11). Do you have to pay for the whole system up front, or are there monthly options? Do you need to commit to multiple years with the software vendor? Do you have to pay continually for the solution or only once? Are there implementation costs? Are there separate fees for the IRS form file reporting and all other functions in the system?

12). Can the employee elections be uploaded via census, or do you need to type it all in?

13). Will the vendor have adequate customer support between Jan. 1 and Jan. 31 so that you can KNOW you will be able to get all the work done?

14). Do you want to just let the payroll vendor do the work for your client? Do you really want to recommend that your client have another function performed by someone who wants nothing more than to take your business away from you?

. . . OK, that is enough! We hope you find this helpful.