
What Should Future of Regulation Be?

It is of course much easier to look back and second-guess regulatory actions. It is far more difficult to propose a way forward and to do so in light of the emerging hot-button issues, including data and the digitization of the industry, insurtech (and regtech), emerging and growing risks, cyber, the Internet of Things (IoT), natural catastrophes, longevity and growing protectionism. The way forward requires consideration of the primary goals of insurance regulation and raises critical questions regarding how regulators prioritize their work and how they interact with one another, with the global industry and with consumers.

We offer below some thoughts and suggestions on these important questions and on how regulation might best move forward over the next 10 years.

Establish a reasonable construct for regulatory relationships.

Relationships matter, and it is imperative for there to be careful consideration of how regulators organize their interactions and reliance on each other. We have some examples in the form of the Solvency II equivalence assessment process, the NAIC’s Qualified Jurisdiction assessment process (under the U.S. credit for reinsurance laws), the NAIC’s accreditation process for the states of the U.S., the U.S.-E.U. Covered Agreement, ComFrame, the IAIS and NAIC’s memorandum of understanding and the IMF financial sector assessment program (FSAP). Each of these provides varying degrees of assessment and regulatory cooperation/reliance.

These processes and protocols, however, have largely emerged on an ad hoc, unilateral basis and in some cases have had a whiff of imperial judgment about them that may not be justified – and certainly is off-putting to counterparties. We would urge regulators to give careful consideration to the goals, guiding principles and the process for achieving greater levels of cooperation and reliance among global regulators.

We hope these efforts would include an appreciation that different approaches/systems can achieve similar results and that no jurisdiction has a monopoly on good solvency regulation. There must also be respect for and recognition of local laws and a recognition that regulatory cooperation and accommodation will benefit regulators, the industry and consumers. Most importantly, regulators need to work together to develop confidence and trust in one another.

The IAIS first coined the phrase “supervisory recognition” in 2009. In March of that year, the IAIS released an “issues paper on group-wide solvency assessment and supervision.” That paper stated that:

“To the extent there is not convergence of supervisory standards and practices, supervisors can pursue processes of ‘supervisory recognition’ in an effort to enhance the effectiveness and efficiency of supervision. Supervisory recognition refers to supervisors choosing to recognize and rely on the work of other supervisors, based on an assessment of the counterpart jurisdiction’s regulatory regime.”


The paper noted the tremendous benefits that can flow from choosing such a path:

“An effective system of supervisory recognition could reduce duplication of effort by the supervisors involved, thereby reducing compliance costs for the insurance industry and enhancing market efficiency. It would also facilitate information sharing and cooperation among those supervisors.”

This is powerful. We urge global insurance regulators to take a step back and consider how they can enhance regulatory effectiveness and efficiency by taking reasonable and prudent steps to recognize effective regulatory regimes − even where those regimes are based on different (perhaps significantly different) rules and principles − provided they have a demonstrated track record of effectiveness.

As noted above, we have seen some efforts at supervisory recognition. These include Solvency II’s equivalence assessment process, the NAIC’s accreditation process for other U.S. states, the NAIC’s “Qualified Jurisdictions” provisions for identifying jurisdictions that U.S. regulators will rely on for purposes of lowering collateral requirements on foreign reinsurers, the E.U.-U.S. Covered Agreement and the IAIS’s Multilateral Memorandum of Understanding. Some of these processes are more prescriptive than others and risk demanding that regulatory standards be virtually identical before they are recognized. This should be avoided.

One size for all is not the way to go.

The alternative approach to recognition of different, but equally effective systems is the pursuit of a harmonized, single set of regulatory standards for global insurers. This approach is much in vogue among some regulators, who assert the “need for a common language” or for “a level playing field” or to avoid “regulatory arbitrage.” Some regulators also argue that common standards will lead to regulatory nirvana, where one set of rules will apply to all global insurers, which will then be able to trade seamlessly throughout all markets.

There are, however, a variety of solvency and capital systems that have proven their effectiveness. These systems are not identical; indeed, the regimes deployed in the E.U. (even pre-Solvency II), the U.S., Canada, Japan, Bermuda, Australia, Switzerland and elsewhere differ profoundly in their regulatory structures, accounting rules and other standards. Attempting to impose a single system or standard ignores commercial, regulatory, legal, cultural and political realities.

Moreover, we question some of the rationale for pursuing uniform standards, including the need for a common language. We suggest that what is really needed is for regulators to continue to work together, to discuss their respective regulatory regimes and to develop a deep, sophisticated knowledge of how their regimes work. From this, trust will develop, and from that a more effective and efficient system of regulation is possible. The engagement and trust building can happen within supervisory colleges. We have seen it emerge in the context of the E.U.-U.S. regulatory dialogue. We saw it in the context of the E.U.-U.S. Covered Agreement. No one, however, has made a compelling case for why one regulatory language is necessary to establish a close, effective working relationship among regulators.

Similarly, the call for a level playing field sounds good, but it is an amorphous, ambiguous term that is rarely, if ever, defined. Does the “playing field” include just regulatory capital requirements? If so, what about tax, employment rules and social charges? What about 50 subnational regulators versus one national regulator? Guarantee funds? Seeking a level playing field can also be code for, “My system of regulation is heavier and more expensive than yours, so I need to put a regulatory thumb on the scales to make sure you have equally burdensome regulations.” This argument was made for decades in the debate surrounding the U.S. reinsurance collateral rules. We hear it now regarding the burdens of Solvency II. It must be asked, however, whether it is the responsibility of prudential regulators to level playing fields, or whether their focus should be solely on prudent regulatory standards for their markets.

Finally, the dark specter of regulatory arbitrage is often asserted as a reason to pursue a single regulatory standard, such as the development of the ICS by the IAIS. But one must ask whether there is really a danger of regulatory arbitrage today among global, internationally active insurers. Yes, a vigilant eye needs to be kept out for weak links in the regulatory system, something the IMF FSAP system has sought to do, supervisory colleges can do and the IAIS is well-equipped to do. But using regulatory arbitrage as an argument to drive the establishment of the same standards for all insurers does not seem compelling.

Proportionality is required.

Often, regulators roll out new regulatory initiatives with the assurance that the new rules will be “proportionate” to the targeted insurers. Too often, this principle receives only lip service. Rarely is it defined – it is tossed out in an attempt to say, “Do not worry, the new rules will not be excessive.” Greater debate about, and greater commitment to, this principle are needed. Clearly, a key component must be a careful cost/benefit analysis of any proposed new standard, with a clear articulation of the perceived danger to be addressed – including the likelihood and severity of its impact – and then a credible calculation of the attendant costs, economic and otherwise, to industry and to regulators. In October 2017, the U.K. Treasury Select Committee published a report criticizing the PRA for its excessively strict interpretation of Solvency II and its negative effect on the competitiveness of U.K. insurers. The report concluded that the PRA had enhanced policyholder protection at the expense of increasing the cost of capital for U.K. insurers, which hurt their ability to provide long-term investments and annuities. Although the PRA emphasized its mandate of prudential regulation and policyholder protection, the Treasury Committee reiterated its concern with how the PRA interpreted the principle of proportionality.

Simplicity rather than complexity.

Over the past 10 years, there has been a staggering increase in proposed and enacted regulatory requirements, many of which are catalogued above. There is a danger, however, that increasingly complex regulatory tools can create their own regulatory blind spots and that overly complex regulations can create a regulatory “fog of war.”

Andrew Haldane, executive director at the Bank of England, delivered a paper in August 2012 at the Federal Reserve Bank of Kansas City’s economic policy symposium, titled “The Dog and the Frisbee.” He graphically laid out when less is really more by talking about two ways of catching a Frisbee: One can “weigh a complex array of physical and atmospheric factors, among them wind speed and Frisbee rotation” − or one can simply catch the Frisbee, the way a dog does. Complex rules, Haldane said, may cause people to manage to the rules for fear of falling afoul of them. The complexity of the rules may induce people to act defensively and focus on the small print at the expense of the bigger picture.

Focusing on the complexity of the banking world, Haldane compared the 20 pages of the Glass-Steagall Act to the 848 pages of Dodd-Frank together with its 30,000 pages of rulemaking, and compared the 18 pages of Basel I to the more than 1,000 pages of Basel III. The fundamental question is whether that additional detail and complexity really adds greater safety to the financial system or has just the opposite effect and significantly increases the cost. Haldane’s analysis provides compelling evidence that increasing the complexity of financial regulation is a recipe for continuing crisis. Accordingly, Haldane calls for a different direction for supervisors with “…fewer (perhaps far fewer), and more (ideally much more) experienced supervisors, operating to a smaller, less detailed rule book.”

Although Haldane’s analysis and discussion focus on the banking system, his assessment and recommendations should be considered carefully by global insurance regulators. The sheer volume and complexity of the rules, models and reports that flood into regulatory bodies raise real questions: who reviews this information? Who really understands it? And, worst of all, does the mountain of detailed information create a false confidence that regulators have good visibility into the risks – particularly the emerging risks – that insurers are facing? A real danger exists of not seeing the forest for the trees.


Regulation should promote competitiveness rather than protectionism.

At a time when competition is growing not only among established companies but also, more importantly, from outside the traditional industry, protectionism will only inhibit growth and stifle better understanding of risk in a rapidly changing business environment. The goal must be to make the industry more competitive, to encourage the transfer of innovation and to create better ways to address risk, product distribution and climate change. Protectionism will only limit the industry’s potential for growth and is both short-sighted and self-defeating.

Recognition of the importance of positive disruption through insurtech, fintech and innovation.

The consensus is that the insurance industry is ripe for disruption because it has been slow (but is now working hard) to modernize in view of an array of innovative and technological advancements. Equally, regulators are trying to catch up with the rapid changes and to understand their impact through sandbox experiments and separate regulatory models. The pace is fast and presents challenges for regulators. Solvency and policyholder protection remain paramount, but cybersecurity, data protection, artificial intelligence and the digital revolution advance every day. Where this will lead is not clear. But change is happening, and regulators must work to understand its impact and calibrate regulatory rules to keep up with the industry and encourage innovation.

Regulation must be transparent.

Too often, regulation is drafted in times of crisis or behind closed doors by regulators who believe they know best how to protect policyholders and how to prevent abuse of the system. As we have said, getting it right matters. A strong and healthy industry is the best way to protect consumers and policyholders. Industry engagement is essential, and acknowledging and actually incorporating industry’s views is critical. This is particularly true given the dramatic changes in the insurance sector and the need to adapt regulation to new economics, business practices and consumer needs and expectations.

This is an excerpt from a report, the full text of which is available here.

Urgent Need on ‘Silent’ Cyber Risks

This is an unprecedented time for insurers. As margins associated with conventional lines of coverage continue to tighten, pressure is increasing to offer new forms of coverage to respond to the emerging cyber threats facing insureds in today’s digital economy. At the same time, insurers are compelled to make certain that those risks are effectively excluded from coverage under many other “traditional” policy forms.

Unfortunately for underwriters of both traditional and newer policy forms, emerging cyber threats can be difficult, if not impossible, to predict and factor into underwriting and policy drafting processes. But as we’ve already seen in the context of cyber incidents, today’s unknown cyber threat can become tomorrow’s front-page news and unanticipated limits payout. And if that threat is spread across multiple insureds in an insurer’s coverage portfolio, the bottom-line effect of the aggregated losses could be devastating. Making matters worse — as recently recognized by the Bank of England’s Prudential Regulation Authority (PRA) — these “silent” cyber exposures can simultaneously affect multiple lines of coverage (including casualty, marine, aviation and transport), on both a direct and a facultative basis.


Imagine this scenario:

Company A manufactures components used in the Wi-Fi systems of commercial airliners. Mr. X, a disgruntled employee of Company A, purposely inserts a software coding vulnerability into the components, which are then sold to Company B, a leading manufacturer of commercial jetliners. Company B incorporates Company A’s components into its jetliners and then sells 30 of them to three major U.S. commercial airlines. Company A also sells the affected components to Company C, which manufactures and sells private charter jets. Company C sells 15 jets containing Company A’s vulnerable components to various private individuals and corporations.

Once the planes are in operation, Mr. X remotely exploits the vulnerability in the aircraft, causing three in-flight planes to go down in populated areas. Plane 1 crashes into a medical center in Small Town. Plane 2 destroys an electrical power station in Mega City, plunging half of the city into darkness. Plane 3, a private corporate jet, causes serious damage to a bridge that is heavily used by a commuter rail service in Sunny City, rendering it unusable and making it virtually impossible for thousands of commuters to get to work.

Widespread panic immediately ensues after the crashes. All U.S. air traffic is halted pending an investigation of the cause. There are numerous traffic accidents and looting incidents following the blackout in Mega City, and many organizations are forced to close indefinitely. Mr. X then contacts Company C and the three airlines that purchased the affected jetliners and demands $1 billion in exchange for revealing the vulnerability.

This obviously is an unlikely scenario, but as technology continues to be used in novel ways, it is important to recognize what will be possible. This scenario was created to highlight a complex casualty catastrophe initiated by a technological weakness in an increasingly connected world. While crashing planes are terrifying, the bigger takeaway is that this was not a possible scenario prior to recent technological developments. It isn’t difficult to see how the multiple insurance coverages triggered by the above scenario could result in insured losses well in excess of $20 billion. Individual company losses could be disastrous, given the previously uncorrelated nature of the individual lines of business that would be affected. While technology forges new connections among businesses and individuals, those connections have ushered in the new risk of technology-initiated catastrophe scenarios, recently labeled a “Cyber Andrew” scenario, in reference to Hurricane Andrew, which resulted in losses few insurers previously believed possible.

The continued expansion of loss causes, courtesy of new technology, will have implications for both legacy insurance and new cyber insurance contracts. This means that insurers must assimilate the expanding possibilities into risk management processes, including Probable Maximum Loss (“PML”) estimates, risk aggregations and risk appetites. At the core of the silent cyber hurdle is a simple question: do current risk management systems capture all possible risks today, and will they capture what can happen tomorrow, before a “Cyber Andrew” hits?
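
As a purely illustrative sketch (all lines of business and figures below are hypothetical), aggregating a single cyber scenario across lines that were priced as uncorrelated shows how quickly “silent” exposure can blow through an assumed PML:

    # Estimated losses (in $ millions) that one cyber scenario triggers across
    # traditional lines priced as if uncorrelated -- all figures hypothetical.
    scenario_losses = {
        "aviation hull and liability": 1200,
        "property (medical center, power station, bridge)": 850,
        "business interruption": 600,
        "general casualty": 450,
        "kidnap, ransom and extortion": 150,
    }

    aggregate = sum(scenario_losses.values())
    assumed_pml = 1500  # the portfolio's pre-scenario probable maximum loss, also hypothetical

    print(f"Aggregated silent-cyber loss: ${aggregate}m vs. assumed PML: ${assumed_pml}m")
    if aggregate > assumed_pml:
        print("Scenario exceeds the assumed PML -- aggregation assumptions need revisiting.")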


This challenge, if the PRA is to be believed, is currently not being met. As the conversations continue to escalate to the C-suite, risk managers need access to a team with specialized skill sets to better understand and incorporate the impact of new technology into their enterprise risk management plans. At the same time, this added focus on technology will continue to expand reporting requirements. Providing detailed yet clear reporting to the board that highlights the full impact of current technologies on the comprehensive insurance portfolio will be a minimum standard.

As technology continues to advance, insurers’ risk management tools and resources must evolve. Each organization will face its own distinct hurdles based on the individual characteristics of its insurance portfolio, and its solution should be just as individualized. There will not be one magic bullet that ends cyber risk. The keys to meeting this challenge will be understanding new and emerging risks and assembling a team of professionals with the requisite skills to address the issues.

What Is and What Isn’t a Blockchain?

I Block, Therefore I Chain?

What is, and what isn’t, a “blockchain”? The Bitcoin cryptocurrency uses a data structure that I have often described as belonging to a class of “mutual distributed ledgers.” Let me set out the terms as I understand them:

  • ledger – a record of transactions;
  • distributed – divided among several or many, in multiple locations;
  • mutual – shared in common, or owned by a community;
  • mutual distributed ledger (MDL) – a record of transactions shared in common and stored in multiple locations;
  • mutual distributed ledger technology – a technology that provides an immutable record of transactions shared in common and stored in multiple locations.
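
To make the definitions concrete, here is a minimal sketch in Python (illustrative only; the class and field names are hypothetical) of an MDL in this sense: an append-only record of transactions, chained by hashes so it is effectively immutable, and stored in multiple locations.

    import hashlib
    import json

    class MutualDistributedLedger:
        """Toy append-only ledger replicated across several 'locations' (here, plain lists)."""

        def __init__(self, n_locations=3):
            # Each location holds its own full copy of the ledger.
            self.locations = [[] for _ in range(n_locations)]

        def append(self, transaction):
            # Chain each record to the previous one by embedding its hash,
            # which is what makes the recorded history effectively immutable.
            previous = self.locations[0][-1]["hash"] if self.locations[0] else "genesis"
            record = {"tx": transaction, "prev": previous}
            record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
            for copy in self.locations:          # "distributed": stored in multiple locations
                copy.append(dict(record))        # "mutual": every participant holds the same record
            return record["hash"]

    ledger = MutualDistributedLedger(n_locations=3)
    ledger.append({"from": "Alice", "to": "Bob", "amount": 10})
    assert ledger.locations[0] == ledger.locations[1] == ledger.locations[2]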

Interestingly, the 2008 Satoshi Nakamoto paper that preceded the Jan. 3, 2009, launch of the Bitcoin protocol does not use the term “blockchain” or “block chain.” It does refer to “blocks.” It does refer to “chains.” It does refer to “blocks” being “chained” and also a “proof-of-work chain.” The paper’s conclusion echoes a MDL – “we proposed a peer-to-peer network using proof-of-work to record a public history of transactions that quickly becomes computationally impractical for an attacker to change if honest nodes control a majority of CPU power.” [Satoshi Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System, bitcoin.org (2008)]
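
The “blocks chained by proof-of-work” idea can be sketched roughly as follows. This is a toy illustration, not the Bitcoin implementation: real Bitcoin hashes a binary block header twice with SHA-256 and adjusts the difficulty target dynamically.

    import hashlib

    def proof_of_work(block_header, difficulty=4):
        """Search for a nonce such that the block hash starts with `difficulty` zero hex digits."""
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_header}|{nonce}".encode()).hexdigest()
            if digest.startswith("0" * difficulty):
                return nonce, digest
            nonce += 1

    # Each block commits to the hash of the previous block, forming the chain;
    # rewriting an old block would mean redoing the proof-of-work for every block after it.
    prev_hash = "0" * 64
    for transactions in (["A->B:10"], ["B->C:4", "C->A:1"]):
        header = prev_hash + "|" + ",".join(transactions)
        nonce, prev_hash = proof_of_work(header)
        print(nonce, prev_hash)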

I have been unable to find the person who coined the term “block chain” or “blockchain.” [Contributions welcome!] The term “blockchain” only makes it into Google Trends in March 2012, more than three years after the launch of the Bitcoin protocol.

[Google Trends chart: search interest in “blockchain”]

And the tide may be turning. In July 2015, the States of Jersey issued a consultation document on regulation of virtual currencies and referred to “distributed ledger technology.” In January 2016, the U.K. Government Office of Science fixed on “distributed ledger technology,” as do the Financial Conduct Authority and the Bank of England. Etymological evolution is not over.

Ledger Challenge

Wuz we first? Back in 1995, our firm, Z/Yen, faced a technical problem. We were building a highly secure case management system that would be used in the field by case officers on personal computers. Case officers would enter confidential details on the development and progress of their work. We needed to run a large concurrent database over numerous machines. We could not count on case officers out on the road dialing in or using Internet connections. Given the highly sensitive nature of the cases, security was paramount, and we couldn’t even trust the case officers overly much, so a full audit trail was required.

We took advantage of our clients’ “four eyes” policy. Case officers worked on all cases together with someone else, and not on all cases with the same person. Case officers had to jointly agree on a final version of a case file. We could count on them (mostly) running into sufficient other case officers over a reasonable period and using their encounters to transmit data on all cases. So we built a decentralized system where every computer had a copy of everything, but encrypted so case officers could only view their own work, oblivious to the many other records on their machines. When case officers met each other, their machines would “openly” swap their joint files over a cable or floppy disk but “confidentially” swap everyone else’s encrypted files behind the scenes, too. Even back at headquarters, four servers treated each other as peers rather than having a master central database. If a case officer failed to “bump into” enough people, then he or she would be called and asked to dial in or meet someone or drop by headquarters to synchronize.  This was, in practice, rarely required.

We called these decentralized chains “data stacks.” We encrypted all of the files on the machines, permitting case officers to share keys only for their shared cases. We encrypted a hash of every record within each subsequent record, a process we called “sleeving.” We wound up with a highly successful system that had a continuous chain of sequentially encrypted records across multiple machines treating each other as peers. We had some problems with synchronizing a concurrent database, but they were surmounted.
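
As a rough reconstruction of the idea (hypothetical code, not the original Z/Yen system), “sleeving” amounts to embedding a hash of each record in its successor, which any machine can verify when two case officers meet and their machines swap files:

    import hashlib
    import json

    def record_hash(record):
        """Hash a record deterministically."""
        return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

    def sleeve(previous_record, payload):
        """Wrap a new record around the hash of its predecessor ('sleeving')."""
        prev = record_hash(previous_record) if previous_record else "genesis"
        return {"payload": payload, "prev_hash": prev}

    def synchronize(machine_a, machine_b):
        """When two machines meet, both keep the longer chain, after checking every sleeve."""
        longer = max(machine_a, machine_b, key=len)
        for i in range(1, len(longer)):
            assert longer[i]["prev_hash"] == record_hash(longer[i - 1]), "chain has been tampered with"
        return list(longer)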

Around the time of our work, there were other attempts to build similar highly secure distributed transaction databases, e.g. Ian Grigg’s Ricardo for payments, and Stanford University’s LOCKSS and CLOCKSS for academic archiving. Some people might point out that we probably weren’t truly peer-to-peer, reserving that accolade for Gnutella in 2000. Whatever. We may have been bright, perhaps even first, but we were not alone.

Good or Bad Databases?

In a strict sense, MDLs are bad databases. They wastefully store information about every single alteration or addition and never delete.

In another sense, MDLs are great databases. In a world of connectivity and cheap storage, it can be a good engineering choice to record everything “forever.” MDLs make great central databases, logically central but physically distributed. This means that they eliminate a lot of messaging. Rather than sending you a file to edit, which you edit, sending back a copy to me, then sending a further copy on to someone else for more processing, all of us can access a central copy with a full audit trail of all changes. The more people involved in the messaging, the more mutual the participation, the more efficient this approach becomes.

Trillions of Choices

Perhaps the most significant announcement of 2015 was in January from IBM and Samsung. They announced their intention to work together on mutual distributed ledgers (aka blockchain technology) for the Internet of Things. ADEPT (Autonomous Decentralized Peer-to-Peer Telemetry) is a jointly developed system for distributed networks of devices.

In summer 2015, a North American energy insurer raised an interesting problem with us. It was looking at insuring U.S. energy companies that were about to offer reduced electricity rates to clients who allowed them to turn appliances on and off — for example, a freezer. Now, freezers in America can hold substantial and valuable quantities of foodstuffs, often worth several thousand dollars. Obviously, the insurer was worried about correctly pricing a policy for the electricity firm in case there was some enormous cyber-attack or network disturbance.

Imagine coming home to find your freezer off and several thousands of dollars of thawed mush inside. You ring your home and contents insurer, which notes that you have one of those new-fangled electricity contracts: The fault probably lies with the electricity company; go claim from them. You ring the electricity company. In a fit of customer service, the company denies having anything to do with turning off your freezer; if anything, it was probably the freezer manufacturer that is at fault. The freezer manufacturer knows for a fact that there is nothing wrong except that you and the electricity company must have installed things improperly. Of course, the other parties think, you may not be all you seem to be. Perhaps you unplugged the freezer to vacuum your house and forgot to reconnect things. Perhaps you were a bit tight on funds and thought you could turn your frozen food into “liquid assets.”

I believe IBM and Samsung foresee, correctly, 10 billion people with hundreds of ledgers each, a trillion distributed ledgers. My freezer-electricity-control-ledger, my entertainment system, home security system, heating-and-cooling systems, telephone, autonomous automobile, local area network, etc. In the future, machines will make decisions and send buy-and-sell signals to each other that have large financial consequences. Somewhat coyly, we pointed out to our North American insurer that it should perhaps be telling the electricity company which freezers to shut off first, starting with the ones with low-value contents.

A trillion or so ledgers will not all run through a single chain. The idea behind cryptocurrencies is “permissionless” participation — any of the billions of people on the planet can participate. Another way of looking at this is that all of the billions of people on the planet are “permissioned” to participate in the Bitcoin protocol for payments. The problem is that they will not be continuous participants. They will dip in and out.

Some obvious implementation choices are:

  • Public vs. private? Is reading the ledger open to all or just to defined members of a limited community?
  • Permissioned vs. permissionless? Are only people with permission allowed to add transactions, or can anyone attempt to add a transaction?
  • True peer-to-peer or merely decentralized? Are all nodes equal and performing the same tasks, or do some nodes have more power and additional tasks?
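
These axes of choice can be summarized in a small, purely illustrative configuration sketch (the names are hypothetical):

    from dataclasses import dataclass
    from enum import Enum

    class Read(Enum):
        PUBLIC = "anyone may read the ledger"
        PRIVATE = "only a defined community may read"

    class Write(Enum):
        PERMISSIONLESS = "anyone may attempt to add a transaction"
        PERMISSIONED = "only approved parties may add transactions"

    class Topology(Enum):
        PEER_TO_PEER = "all nodes equal, performing the same tasks"
        DECENTRALIZED = "some nodes have more power and extra tasks"

    @dataclass
    class LedgerDesign:
        read: Read
        write: Write
        topology: Topology

    # The Bitcoin blockchain, in these terms, would be roughly:
    bitcoin_like = LedgerDesign(Read.PUBLIC, Write.PERMISSIONLESS, Topology.PEER_TO_PEER)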

People also need to decide if they want to use an existing ledger service (e.g. Bitcoin, Ethereum, Ripple), copy a ledger off-the-shelf, or build their own. Building your own is not easy, but it’s not impossible. People have enough trouble implementing a single database, so a welter of distributed databases is more complex, sure. However, if my firm can implement a couple of hundred with numerous variations, then it is not impossible for others.

The Coin Is Not the Chain

Another sticking point of terminology is adding transactions. There are numerous validation mechanisms for authorizing new transactions, e.g. proof-of-work, proof-of-stake, consensus or identity mechanisms. I divide these into “proof-of-work,”  i.e. “mining,” and consider all others various forms of “voting” to agree. Sometimes, one person has all the votes. Sometimes, a group does. Sometimes, more complicated voting structures are built to reflect the power and economic environment in which the MDL operates. As Stalin said, “I consider it completely unimportant who in the party will vote, or how; but what is extraordinarily important is this — who will count the votes, and how.”
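
As a contrast to proof-of-work “mining,” here is a minimal sketch of the “voting” family: a transaction is authorized once approving validators hold a quorum of the voting weight. The validators, weights and quorum below are hypothetical stand-ins for the “more complicated voting structures” mentioned above.

    def authorize(votes, weights, quorum=2 / 3):
        """Accept a transaction if approving validators hold at least `quorum` of the voting weight."""
        total = sum(weights.values())
        approving = sum(weights[v] for v, approved in votes.items() if approved)
        return approving / total >= quorum

    # One person with all the votes, or a group, or any weighting in between:
    weights = {"regulator": 0.5, "exchange": 0.3, "auditor": 0.2}
    print(authorize({"regulator": True, "exchange": True, "auditor": False}, weights))  # True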

As the various definitions above show, the blockchain is the data structure, the mechanism for recording transactions, not the mechanism for authorizing new transactions. So the taxonomy starts with an MDL or shared ledger; one kind of MDL is a permissionless shared ledger, and one form of permissionless shared ledger is a blockchain.

Last year, Z/Yen created a timestamping service, MetroGnomo, with the States of Alderney. We used a mutual distributed ledger technology, i.e. a technology that provides an immutable record of transactions shared in common and stored in multiple locations. However, we did not use “mining” to authorize new transactions. Because the incentive to cheat appears irrelevant here, we used an approach called “agnostic woven” broadcasting from “transmitters” to “receivers” — to paraphrase Douglas Hofstadter, we created an Eternal Golden Braid.

So is MetroGnomo based on a blockchain? I say that MetroGnomo uses a MDL, part of a wider family that includes the Bitcoin blockchain along with others that claim to use technologies similar to the Bitcoin blockchain. I believe that the mechanism for adding new transactions is novel (probably). For me, it is a moot point whether we “block” a group of transactions or write them out singly (blocksize = 1).

Yes, I struggle with “blockchain.” When people talk to me about blockchain, it’s as if they’re trying to talk about databases yet keep referring to “The Ingres” or “The Oracle.” They presume the technological solution, “I think I need an Oracle” (sic), before specifying the generic technology, “I think I need a database.” Yet I also struggle with MDL. It may be strictly correct, but it is long and boring. Blockchain, or even “chains” or “ChainZ” is cuter.

We have tested alternative terms such as “replicated authoritative immutable ledger,” “persistent, pervasive, and permanent ledger” and even the louche “consensual ledger.” My favorite might be ChainLedgers. Or Distributed ChainLedgers. Or LedgerChains. Who cares about strict correctness? Let’s try to work harder on a common term. All suggestions welcome!