Tag Archives: third parties

In Third Parties We (Mis)trust?

Technology is transforming trust. Never before has it been so easy to strike up a commercial relationship across great geographical distance. With a credible website and reasonable products or services, people are prepared to learn about companies half a world away and enter into commerce with them.

Society is changing radically when people find themselves trusting people with whom they’ve had no experience, e.g. on eBay or Facebook, more than banks they’ve dealt with their whole lives.

Mutual distributed ledgers pose a threat to the trust relationship in financial services.

The History of Trust

Trust leverages a history of relationships to extend credit and the benefit of the doubt to someone. Trust is about much more than money; it’s about human relationships, obligations and experiences, and about anticipating what other people will do.

In risky environments, trust enables cooperation and permits voluntary participation in mutually beneficial transactions that are otherwise costly to enforce or cannot be enforced by third parties. By taking a risk on trust, we increase the amount of cooperation throughout society while simultaneously reducing the costs, unless we are wronged.

Trust is not a simple concept, nor is it necessarily an unmitigated good, but trust is the stock-in-trade of financial services. In reality, financial services trade on mistrust. If people trusted each other on transactions, many financial services might be redundant.

People use trusted third parties in many roles in finance: for settlement, as custodians, as payment providers, as poolers of risk. Trusted third parties perform three roles:

  • validate – confirming the existence of something to be traded and membership of the trading community;
  • safeguard – preventing duplicate transactions, i.e. someone selling the same thing twice or “double-spending”;
  • preserve – holding the history of transactions to help analysis and oversight, and in the event of disputes.

A ledger is a book, file or other record of financial transactions. People have used various technologies for ledgers over the centuries. The Sumerians used clay cuneiform tablets. Medieval folk split tally sticks. In the modern era, the implementation of choice for a ledger is a central database, found in all modern accounting systems. In many situations, each business keeps its own central database with all its own transactions in it, and these systems are reconciled, often manually and at great expense if something goes wrong.

But in cases where many parties interact and need to keep track of complex sets of transactions, they have traditionally found that creating a centralized ledger is helpful. A centralized transaction ledger needs a trusted third party who makes the entries (validates), prevents double counting or double spending (safeguards) and holds the transaction histories (preserves). Over the ages, centralized ledgers are found in registries (land, shipping, tax), exchanges (stocks, bonds) or libraries (index and borrowing records), just to give a few examples.

The latest technological approach to all of this is the distributed ledger (aka blockchain aka distributed consensus ledger aka the mutual distributed ledger, or MDL, the term we’ll stick to here). To understand the concept, it helps to look back over the story of its development:

1960/’70s: Databases

The current database paradigm began around 1970 with the invention of the relational model, and the widespread adoption of magnetic tape for record-keeping. Society runs on these tools to this day, even though some important things are hard to represent using them. Trusted third parties work well on databases, but correctly recording remote transactions can be problematic.

One approach to remote transactions is to connect machines and work out the lumps as you go. But when data leaves one database and crosses an organizational boundary, problems start. For Organization A, the contents of Database A are operational reality, true until proven otherwise. But for Organization B, the message from A is a statement of opinion. Orders sit as “maybe” until payment is made and has cleared past the last possible chargeback: this tentative quality is always attached to data from the outside.

1980/’90s: Networks

Ubiquitous computer networking came of age two decades after the database revolution, starting with protocols like email and hitting its full flowering with the invention of the World Wide Web in the early 1990s. The network continues to get smarter, faster and cheaper, as well as more ubiquitous – and it is starting to show up in devices like our lightbulbs under names like the Internet of Things. While machines can now talk to each other, the systems that help us run our lives do not yet connect in joined-up ways.

Although in theory information could just flow from one database to another with your permission, in practice the technical costs of connecting databases are huge. Worse, we go back to paper and metaphors from the age of paper because we cannot get the connection software right. All too often, the computer is simply a way to fill out forms: a high-tech paper simulator. It is nearly impossible to get two large entities to share our information between them on our behalf.

Of course, there are attempts to clarify this mess – to introduce standards and code reusability to help streamline business interoperability. You can choose from EDI, XMI-EDI, JSON, SOAP, XML-RPC, JSON-RPC, WSDL and half a dozen more standards to “assist” your integration processes. The reason there are so many standards is that none of them has finally solved the problem.

Take the problem of scaling collaboration. Say that two of us have paid the up-front costs of collaboration and have achieved seamless technical harmony, and now a third partner joins our union, then a fourth and a fifth … by five partners, we have 10 connections to debug; by 10 partners, the number is 45. The cost of collaboration keeps going up for each new partner as they join our network, and the result is small pools of collaboration that just will not grow. This isn’t an abstract problem – this is banking, finance, medicine, electrical grids, food supplies and government.
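The growth is easy to check: a fully meshed network of n partners needs n(n-1)/2 point-to-point connections. A minimal sketch in Python (the partner counts are arbitrary examples):

```python
# Pairwise connections in a fully meshed network of n partners: n*(n-1)/2.
def connections(n: int) -> int:
    return n * (n - 1) // 2

for n in (2, 5, 10, 20):
    print(f"{n} partners -> {connections(n)} connections")
# 2 partners -> 1, 5 -> 10, 10 -> 45, 20 -> 190
```

Each newcomer must integrate with every existing partner, which is why hub-and-spoke arrangements look so attractive.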

A common approach to this quadratic quandary is to put somebody in charge, a hub-and-spoke solution. We pick an organization – Visa would be typical – and all agree that we will connect to Visa using its standard interface. Each organization has to get just a single connector right. Visa takes 1% off the top, making sure that everything clears properly.

But while a third party may be trusted, that doesn’t mean it is trustworthy. There are a few problems with this approach, but they can be summarized as “natural monopolies.” Being a hub for others is a license to print money for anybody that achieves incumbent status. Visa gets 1% or more of a very sizeable fraction of the world’s transactions with this game; Swift likewise.

If you ever wonder what the economic upside of the MDL business might be, just have a think about how big that number is across all forms of trusted third parties.

2000/’10s: Mutual Distributed Ledgers

MDL technology securely stores transaction records in multiple locations with no central ownership. MDLs allow groups of people to validate, record and track transactions across a network of decentralized computer systems with varying degrees of control of the ledger. Everyone shares the ledger. The ledger itself is a distributed data structure held in part or in its entirety by each participating computer system. The computer systems follow a common protocol to add transactions. The protocol is distributed using peer-to-peer application architecture. MDLs are not technically new – concurrent and distributed databases have been a research area since at least the 1970s. Z/Yen built its first one in 1995.
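To make the structure concrete, here is a minimal sketch (in Python, with invented field names) of the kind of hash-chained, append-only record a participating node might hold; real MDLs add digital signatures, a consensus protocol and peer-to-peer networking on top:

```python
import hashlib
import json
import time

GENESIS = "0" * 64  # placeholder hash standing in for the first entry's predecessor

def make_entry(prev_hash: str, payload: dict) -> dict:
    """Create a ledger entry linked to its predecessor by hash."""
    body = {"time": time.time(), "prev": prev_hash, "data": payload}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify(ledger: list) -> bool:
    """Any node can independently re-check the whole chain."""
    prev = GENESIS
    for entry in ledger:
        body = {k: entry[k] for k in ("time", "prev", "data")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

ledger, prev = [], GENESIS
for tx in ({"from": "A", "to": "B", "amount": 10}, {"from": "B", "to": "C", "amount": 4}):
    entry = make_entry(prev, tx)
    ledger.append(entry)
    prev = entry["hash"]
print(verify(ledger))  # True; altering any entry breaks every later hash
```

Because each entry embeds the hash of its predecessor, tampering with history is immediately detectable by every other participant holding a copy.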

Historically, distributed ledgers have suffered from two perceived disadvantages: insecurity and complexity. These two perceptions are changing rapidly because of the growing use of blockchain technology, the MDL of choice for cryptocurrencies. Cryptocurrencies need to:

  • validate – have a trust model for time-stamping transactions by members of the community;
  • safeguard – have a set of rules for sharing data of guaranteed accuracy;
  • preserve – have a common history of transactions.

If faith in the technology’s integrity continues to grow, then MDLs might substitute for two roles of a trusted third party, preventing duplicate transactions and providing a verifiable public record of all transactions. Trust moves from the third party to the technology. Emerging techniques such as smart contracts and decentralized autonomous organizations might in future also permit MDLs to act as automated agents.

A cryptocurrency like bitcoin is an MDL with “mining on top.” The mining substitutes for trust: “proof of work” is simply proof that you have a warehouse of expensive computers working, and the proof is the output of their calculations! Cryptocurrency blockchains do not require a central authority or trusted third party to coordinate interactions, validate transactions or oversee behavior.
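A toy illustration of the idea (this is not Bitcoin’s actual block format; the header string and difficulty are invented for the example):

```python
import hashlib

def proof_of_work(block_header: str, difficulty: int = 4) -> int:
    """Find a nonce whose hash over the header starts with `difficulty` zero hex digits.

    There is no shortcut to finding the nonce, so presenting a valid one is
    evidence of computation spent (the "warehouse of expensive computers"
    in miniature). Verifying it takes a single hash.
    """
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_header}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = proof_of_work("example block header")
print(nonce)  # expensive to find, cheap for everyone else to check
```

The asymmetry is the point: finding the nonce is costly, checking it is trivial, so the network can verify the work without trusting the miner.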

However, when the virtual currency is going to be exchanged for real-world assets, we come back to needing trusted third parties to trade ships or houses or automobiles for virtual currency. A big consequence may be that the first role of a trusted third party, validating an asset and identifying community members, becomes the most important. This is why MDLs may challenge the structure of financial services, even though financial services are here to stay.

Boring ledgers meet smart contracts

MDLs and blockchain architecture are essentially protocols that can work as well as hub-and-spoke for getting things done, but without the liability of a trusted third party in the center that might choose to exploit its natural monopoly. Even with smaller trusted third parties, MDLs have a magic property: the same agreed data on all nodes, “distributed consensus,” rather than data passed around through messages.

In the future, smart contracts can store promises to pay and promises to deliver without a middleman or exposure to the risk of fraud. The same logic that secured “currency” in bitcoin can be used to secure little pieces of detached business logic. Smart contracts may automatically move funds in accordance with instructions given long ago, like a will or a futures contract. For pure digital assets there is no counterparty risk, because the value to be transferred can be locked into the contract when it is created and released automatically when the conditions and terms are met: if the contract is clear, then fraud is impossible, because the program has real control of the assets involved rather than requiring trustworthy middlemen like ATMs or car rental agents. Of course, such structures challenge some of our current thinking on liquidity.
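A minimal sketch of that locking-and-release logic in ordinary Python (a real smart contract would live on the ledger itself; the condition, amounts and field names are invented):

```python
class EscrowContract:
    """Toy 'smart contract': value is locked in at creation and released
    automatically when the agreed condition is met, with no middleman."""

    def __init__(self, amount: float, condition):
        self.locked = amount        # value locked into the contract at creation
        self.condition = condition  # predicate both parties agreed to
        self.paid_out = False

    def settle(self, evidence: dict) -> float:
        """Release the locked value once, and only if the condition holds."""
        if self.paid_out or not self.condition(evidence):
            return 0.0
        self.paid_out = True
        payout, self.locked = self.locked, 0.0
        return payout

# Pay 100 to the seller once delivery is confirmed on or before day 30.
contract = EscrowContract(100.0, lambda e: e.get("delivered") and e.get("day", 99) <= 30)
print(contract.settle({"delivered": True, "day": 12}))  # 100.0
print(contract.settle({"delivered": True, "day": 12}))  # 0.0, cannot pay twice
```

Because the value is already inside the contract, there is nothing for a counterparty to withhold; the residual risks lie in the clarity of the condition and in whatever supplies the evidence.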

Long Finance has a Zen-style koan, “if you have trust I shall give you trust; if you have no trust I shall take it away.” Cryptocurrencies and MDLs are gaining more and more trust. Trust in contractual relationships mediated by machines sounds like science fiction, but the financial sector has profitably adapted to the ATM, Visa, Swift, Big Bang, HFT and many other innovations. New ledger technology will enable new kinds of businesses: by reducing the cost of trust, it allows enterprises to be profitable that previously were not viable. The speed of adoption of new technology sorts winners from losers.

Make no mistake: the core generation of value has not changed; banks are trusted third parties. The implication, though, is that much more will be spent on identity (such as Anti-Money-Laundering/Know-Your-Customer checks backed by indemnity) and on asset validation than on transaction fees.

A U.S. political T-shirt about terrorists and religion inspires a closing thought: “It’s not that all cheats are trusted third parties; it’s that all trusted third parties are tempted to cheat.” MDLs move some of that trust into technology. And as costs and barriers to trusted third parties fall, expect demand and supply to increase.

How to Manage Claims Across Silos

The long-minimized and largely untapped synergy between casualty claims and benefit programs may offer opportunities for both industries.

Some argue that these worlds are just too different and distinct to bring together, whether through simple alignment or partial to full integration. Managers are often more comfortable in their own functional areas, and sometimes crossing over can stretch expertise and focus. Fundamentally, however, claims are claims.

There’s been a shift in thinking and a growing interest in a more collaborative, aligned and even fully integrated services approach – one that takes many forms but that at its core incorporates a more combined strategy from date of incident through claim closure. The targeted goals for this approach are:

  • Ensuring an appropriate employee experience throughout the life of the claim
  • Targeting and delivering optimal outcomes
  • Minimizing the cost of risk associated with the reasons employees are under medical care or unable to contribute productively to their employer’s mission

Shared Goals

On its face, the value of collaboration seems obvious. From both an employee benefits and risk management perspective, providing care for the individual is of the utmost importance. One of the main objectives is ensuring the right outcomes, which includes leveraging the basic skills of investigation, verification, documentation and equitable resolution that are common between these two realms.

The nuances and distinctions that exist between them are not insignificant, but the key goals are the same – caring for people under medically related distress (regardless of source), minimizing disruptions to workforce productivity and closing claims efficiently and effectively with fairness to all parties and their respective goals and objectives.

Although these objectives have varying levels of importance in each field, they are fundamental to process effectiveness in both. This is not to say that there aren’t unique aspects of each that require particular expertise and skills to achieve specific goals.

However, while blending skill requirements among a common group of claims professionals can be challenging, it is not rocket science. Defining and filling positions to enable successful claims handling in both worlds is eminently doable. The biggest hurdle may in fact be the necessary collaboration between these typically distinct functional areas and their leaders.

Many employers are already effectively managing employee injury and disease exposures. There are discernible trends emerging toward fewer silos and more performance-oriented measurements that are focused on short- and long-term strategies. Those companies taking a more collaborative approach can benefit from key elements such as:

  • Compassionate care that puts employee interests first
  • Integrated reporting and measurement across departments
  • Robust analytics that result in prescriptive actions with impact
  • Innovative tools targeted to specific process opportunity areas
  • A more holistic focus on the care of affected employees
  • The over-arching goal of a healthy, productive workforce

So whether or not you have direct responsibility for both functional areas, I urge you to lead the charge to leverage this opportunity for the benefit of your organization.

Third Parties Pose Problems With Cyber

In today’s cyber world, business is done digitally. Trusted cyber relationships between partners must be formed to effectively conduct business and stay at the forefront of innovation and customer service. Having these trusted partnerships comes with a major drawback, however.

Look at it from this perspective: if your organization is the target of a malicious actor who finds your defenses too difficult to penetrate, the attacker can use a partner company to find a way in. Depending on the difficulty, attackers may target multiple third parties in an attempt to gain access to your network.

The important factor to keep in mind here is that just because your organization may have top-notch security practices in place, it does not mean your partners do, and they can be targeted for their valuable insider access to your systems.

Third-party companies, no matter how trivial they may seem to your everyday operations, need to be thoroughly vetted. If they are given secure insider access as part of doing business with your organization, their systems must be reviewed and assessed for security vulnerabilities. The adage, “you’re only as strong as your weakest link,” could not be more true when it comes to third-party vulnerabilities.

Coming to grips with risk

Partners may think of themselves as unlikely targets, but even your HVAC vendor could be creating a gaping hole in your security network that malicious actors may use to gain access to your sensitive information.

For example, financial enterprises have extremely large networks of third-party vendors and partners, from payment processors and auditors to Internet providers and other financial institutions. Being able to map your third parties’ public Internet space and network presence allows you to identify indicators of compromise and risk that paint an accurate depiction of your partners’ potential attack surface.
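As a hedged illustration of the simplest form such mapping might take, the sketch below resolves a list of hypothetical partner hostnames and probes a handful of commonly exposed ports. Real attack-surface monitoring draws on DNS records, certificate-transparency logs and IP registrations, and port scanning should only ever be run against infrastructure you are authorized to test:

```python
import socket

# Hypothetical partner-facing hosts; a real inventory would be built from
# contracts, DNS and network records rather than hard-coded names.
PARTNER_HOSTS = ["portal.example-vendor.com", "sftp.example-processor.net"]
COMMON_PORTS = [21, 22, 25, 80, 443, 3389]  # services often left exposed

def exposed_services(host: str, ports=COMMON_PORTS) -> list:
    """Return the probed ports on `host` that accept a TCP connection."""
    try:
        ip = socket.gethostbyname(host)
    except socket.gaierror:
        return []  # host does not resolve; nothing public to probe
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((ip, port), timeout=2):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered or timed out
    return open_ports

for host in PARTNER_HOSTS:
    print(host, exposed_services(host))
```

Even a crude inventory like this makes the point: every reachable partner service is part of your effective attack surface, whether or not it sits inside your own perimeter.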

When we think of potential targets for hacking, we naturally think of big companies or government agencies – organizations that have large volumes of critical and sensitive data. But because these organizations typically have the funds and resources to implement sophisticated security, they are usually not the weak link when it comes to an attack.

Because these organizations cannot be easily accessed, malicious actors adjust their attack strategies to use alternate paths to their desired goal – less-secured partners with privileged access. Once a vulnerable company is compromised, its trusted access into other partners allows malicious actors to bypass security controls that blocked their exploits previously. Adversaries are then free to roam the connected partner networks, essentially undetected.

Dealing with the problem

The moral here is that insider threats don’t necessarily have to come from within an organization. Trusted third parties, once compromised, create significant security risks to sensitive data. Organizations must look beyond their own defensive perimeters and consider monitoring their partners to better understand their complete attack surface – especially large and complex organizations in which new services are frequently delivered on outward-facing infrastructures.

Understanding the complete attack surface not only provides the intelligence to prevent abuse, but it provides insight into how an attacker may view a path of attack. Additionally, gaining insight into third-party partners, vendors and suppliers is crucial in creating an informed and dynamic risk management program.

Most organizations are busy enough dealing with their own IT infrastructure, so double-checking the risks associated with their partners may not be at the top of their priority list. However, in today’s cyber threat landscape, if you don’t take into account the security posture of your partners, you will never truly mitigate your risk, and you will leave gaps in your defenses through which anyone can reach your critical information.

This article was written by Jason Lewis. Lewis is the chief collection and intelligence officer at LookingGlass and a network analyst who has led technology initiatives in the private and public sectors.

7 Imperatives for Moving Into the Cloud

For property and casualty insurance carriers, growth is hard-fought in an environment of compressed margins, regulatory scrutiny, increased competition and customer expectations for anywhere/anytime service. Add unsteady economic conditions, low interest rates that decrease investment income and catastrophic losses from significant events such as Hurricane Sandy into the mix, and insurers are finding that their tried-and-true business methodologies that worked well pre-2008 are in desperate need of a facelift. Growth is especially challenging for insurance carriers with inflexible legacy technology systems, as well as small and mid-size carriers that lack the resources to make the product and operational changes they need to remain relevant and profitable.

Insurance carriers must navigate an environment that rewards nimbleness and flexibility, but to do so requires that insurers modernize their current systems and processes. Consider the example of bringing a new product to market. At most insurers, the process may take six months or more, with a price tag reaching seven figures. By the time the product is ready to launch, the dynamics in the market have shifted, or perhaps a new regulation has been legislated. The insurer has two equally unappealing choices: Launch the product as is and never realize a return on investment, or delay launch and retool the product, increasing the R&D price tag and losing potential revenue and market share.

There is a better way: Updating legacy systems with flexible and scalable Software as a Service (SaaS) computing capabilities allows P&C insurers to rapidly capitalize on opportunities and support growth. This article presents seven imperatives for the P&C insurance industry based on industry research and analysis, and outlines how a SaaS implementation can address each imperative.

IMPERATIVE 1: INCREASE SPEED-TO-MARKET 

In an Accenture survey of insurance industry professionals, more than seven of 10 (72%) respondents indicated that it takes their organization six months or more to launch a major product. In today’s constantly changing environment, six months is a long time indeed, and it’s likely that the market looks different than when product development began. However, insurers that are able to rapidly offer innovative products and services through multiple channels can take advantage of shifts in the market and exploit the slowness of competitors. Today, “slow and steady” doesn’t win the race.

Compared with legacy system-based product development, which requires coding, scripting and testing, a SaaS infrastructure by design incorporates more nimble and configurable software, significantly reducing development time and eliminating the cost of hiring a vendor or consultant to make coding changes. In addition, SaaS provides rapid provisioning of live and test environments to further increase speed-to-market. Lastly, SaaS requires minimal investment in hardware, software and personnel. Insurers can use a pre-configured infrastructure to reduce development costs by more than 80% over comparable legacy systems, according to Donald Harrell, senior vice president of marine, exploration and production for Liberty International Underwriters. This, in turn, reduces the risk for product launches.

IMPERATIVE 2: QUICKLY RESPOND TO MARKET AND COMPETITIVE CHANGES

Those insurers not able to turn on a dime may be in trouble because so many of their competitors are preparing to invest in technologies and processes that will help them design, underwrite and distribute products and services more quickly. More than 80% of insurance CEOs are planning to increase investment in technology, and more than 60% plan to develop their capacity for innovation. Innovation must continue after product launch, and SaaS allows insurers to retool products as market drivers dictate.

The ability to revamp an existing product is particularly attractive to small or mid-size insurers launching products to a relatively small target market. With SaaS, insurers are able to bring niche products to market that would otherwise not deliver enough ROI to justify the investment. Likewise, if a product is not profitable, an insurer can make changes and quickly reconfigure the product rather than being forced to offer an unprofitable or marginally profitable product because it’s too costly to make changes.

Insurers can also more effectively price products. SaaS is charged on a subscription or consumption basis, so costs are more closely aligned with the revenue being generated by the new product.

IMPERATIVE 3: REDUCE COSTS TO MAINTAIN PROFITABILITY

As the U.S. economy slowly improves, P&C profitability is starting to improve as well. However, there is little cause for celebration. Fitch Ratings warns insurers that the current pricing cycle may be running out of steam, forcing insurers to cut expense levels to maintain profitability. Now is the time for insurers to put in place cost-saving strategies. With a SaaS infrastructure, insurers can innovate and offer new products and services without incurring capital expenses.

Rather than implement an expensive technology infrastructure, SaaS allows insurers to leverage preconfigured infrastructure and reduce IT resource requirements, staffing and professional services fees. In fact, SaaS up-front costs are typically less than 20% of the development costs of legacy systems. SaaS pricing models have also matured, giving insurers access to a variety of bundled and unbundled pricing options.

IMPERATIVE 4: AUTOMATE AND STREAMLINE UNDERWRITING

A survey of insurance professionals by FirstBest Systems found that 82% of respondents believe that their insurer’s underwriters spend less than half of their time actually underwriting, with the majority of underwriter time spent on data collection and administrative tasks. Insurers understand that giving underwriters the automation tools they need to do their jobs effectively is key to improved underwriting, but many believe that the technology is problematic, with 81% citing lack of data integration as limiting underwriting productivity. In contrast to legacy underwriting systems, SaaS allows insurers to easily incorporate rules to automate the underwriting process and increase underwriting ratios and revenues.
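As a rough sketch of what rule-driven underwriting automation can look like (the fields, thresholds and decisions here are invented for illustration; a SaaS platform would expose such rules as configuration rather than code):

```python
# Ordered underwriting rules: the first matching rule decides the application.
RULES = [
    ("decline", lambda app: app["prior_losses"] >= 3),
    ("refer",   lambda app: app["total_insured_value"] > 5_000_000),
    ("refer",   lambda app: app["years_in_business"] < 2),
    ("accept",  lambda app: True),  # default when no earlier rule fires
]

def underwrite(application: dict) -> str:
    """Return the decision of the first rule that matches the application."""
    for decision, rule in RULES:
        if rule(application):
            return decision
    return "refer"  # unreachable with the default rule; kept as a safe fallback

app = {"prior_losses": 0, "total_insured_value": 750_000, "years_in_business": 8}
print(underwrite(app))  # accept
```

The appeal of the configurable approach is that thresholds can be tuned as the book of business changes, without a full coding and testing cycle.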

SaaS also allows for streamlined data integration as opposed to off-the-shelf packages that often need extensive modification, thus eliminating a major stumbling block to optimal productivity for underwriters.

IMPERATIVE 5: SUPPORT NEW DELIVERY CHANNELS

Mobile technology continues to be top-of-mind for many carriers, with more than 60% planning to add new mobile capabilities for policyholders and agents. Notes Novarica partner Matthew Josefowicz, “As the use of smartphones and especially tablets displaces the use of desktops and laptops in more areas of personal and professional life, support for these platforms is becoming critical to insurers’ abilities to communicate electronically across the value chain.” The problem for carriers is that legacy systems were not designed to run on mobile devices. However, SaaS, with its more modern coding, is able to provide both a better user interface and operational efficiency for smartphones and tablets. SaaS allows insurers to distribute products through a variety of new channels (e.g., banks, car dealerships) that would not be possible with legacy systems.

Creating and recreating websites and portals quickly and inexpensively means that insurers can more readily compete with “disrupters” that use a direct-to-consumer model. Insurers can design multiple portals for different geographies, languages and associations in near-real time. Deloitte reiterates the importance of mobile and other delivery channels for insurers: “No one can afford to take their distribution systems for granted. More insurers are likely to grow bolder in exploring alternative channels to capture greater market share, catering to the needs and preferences of different segments while cutting frictional costs.”

IMPERATIVE 6: COLLABORATE WITH THIRD PARTIES

Insurers are increasingly relying on third parties for a variety of integration services, including regulatory compliance, sophisticated data analysis, geo-location capabilities for risk assessments and risk ratings for more accurate underwriting and risk pricing. Integration between carrier legacy systems and third-party providers is typically problematic because of proprietary file formats and other issues that make it difficult to share data. In contrast, SaaS provides links to existing interfaces for access to third-party databases. Integration reduces costly, error-prone and time-consuming manual intervention.

IMPERATIVE 7: IMPROVE THE CUSTOMER EXPERIENCE

The majority of insurers (91%) believe that future growth depends on providing a special customer experience, according to Accenture’s survey. However, getting the relevant and up-to-date data they need to give customers a personalized experience is a critical challenge for 95% of respondents.

In the same survey, only 50% of insurers say that their carrier leverages data about customer lifestyles to determine the products and services most likely to meet customer expectations; 70% rate themselves as “average” or “weak” in their ability to tailor products and services to customers’ needs. A similar number (64%) give themselves low ratings for their ability to provide innovative products and services. Poor service — or even average service — is no longer acceptable. Consumers are accustomed to personalized experiences such as shopping on Amazon or booking airline tickets on a travel site, and expect a similar type of experience from their insurer.

Thomas Meyer, managing director of Accenture’s insurance practice, says, “To pursue profitable growth, insurers need to achieve the kind of differentiation that allows organizations like Apple to charge a premium while building customer loyalty. As Apple has shown, the answer is consumer-driven innovation that creates an exceptional user experience.” SaaS enables insurers to access the data points they require to differentiate their products throughout the customer experience. In a market commoditized by regulations and related factors, insurers that can leverage SaaS to deliver a straightforward, simple process to customers will give themselves a competitive advantage.

CONCLUSION

In an accelerated market where change is the new constant, P&C insurance carriers cannot afford to continue to do business as usual. Imperatives such as speed-to-market, responsiveness to customer demands, new delivery channels, cost reduction and improved underwriting make it necessary for insurers to explore new methods of providing products and services to customers. SaaS, with its flexibility, scalability and low cost, is a technology imperative if carriers hope to grow and remain competitive.

The full white paper is available from Oceanwide.