
Insurers Must Collaborate on Cyber

We are living in the accumulated aftermath of the countless cyber breaches that, since the turn of the century, have cost the global economy over $2 trillion. We are in the untenable situation where insurers find it nearly impossible to provide security for their insureds while safeguarding their own profitability.

However, the destruction and loss of the past need not be the fate of the future. If cyber liability and technology E&O insurers learn from the recent past, then insurers can help give rise to a future cyber realm that is free from the doubt and fear that are prevalent now.

Over the past two decades, insurers have not worked with members across the private sector to put in place unified laws governing the cyber realm, so laws that have been enacted, or are about to be enacted, around the world are making it more difficult to provide cyber liability insurance. What may be even worse is that, for roughly the past four years, various governments have argued against end-to-end encryption (E2EE), and insurers have not responded swiftly to that threat, either. If a country, especially one like the U.S., were to outlaw E2EE, then providing cyber liability insurance to anyone would become even more difficult than it already is.

Thus far, insurers rarely speak to each other regarding their most prominent common adversary: hackers. Perhaps the only time insurers broach the subject of that adversary is at a NetDiligence or PLUS Cyber Symposium conference, and even then hackers are treated as more of an appetizer than a main course. If a hacker or hacking group causes five different insurers a combined loss of $50 million, that is hardly an inconsequential loss. However, because insurers do not talk to each other, not only do they not know the common methods of attack on their insureds, or the collective loss they have suffered, but they also have no way to focus efforts on removing that hacking threat. Nor can they know that a hacker or hacking group is targeting a specific sector of the private sphere, because the only way to know that is through shared intelligence.

Every day, threat actors from nation states, hacking groups and standalone hackers use advances in breach techniques learned from each other to create the next unstoppable attack. It is time for insurers to pool their resources so that they and their insureds can begin to level the playing field against their main adversary, and so that the laws that are passed benefit insureds and insurers alike.

Insurers also need to look at the complete picture to be responsible netizens and help craft a safer cyber future. When semiconductor technology in the form of computers began to integrate with the personal and professional realms in the 1980s and 1990s, at least in the U.S., it was a very tortured process. Almost as soon as businesses had upgraded to 33MHz processors, 66MHz processors came out. Similarly, the original floppy disk drives quickly gave way to 3.5-inch disks, which gave way to Zip drives, CD-ROMs and so forth. In software, things were no better. After finally adopting computers and learning DOS, businesses were introduced to Windows 3.1 and thereafter were upgraded to Windows 95, 98, 98SE and beyond. Every part of binary technology over the past 40 years has seen a relentless drive toward the cutting edge, and that pursuit thrust upon the people of this world a technological reality that very few understand.

Today, most people cannot say what SoC (system on chip) drives their smartphones, what GPU stands for, what the differences are between 4G and 5G wireless technologies, or what many other basic technological concepts mean. Even among insurance professionals, many people still hunt and peck, unable to achieve a typing speed of 45 words per minute.

Worldwide, almost all schools lack a structured curriculum for the K-12 system that not only teaches binary fundamentals to the young but also helps them to understand computing history and the potential future of computing and networking technology. Consequently, despite the significant numbers of people using social media and smartphones, and the rise of IoT, most people do not know the fundamentals of our present binary world.

Perhaps more damaging is what the future holds. If most people barely understand current technology, then quantum computing, carbon nanotubes and neuromorphic technology will be even more unnerving for even more people. This disparity between the few who understand the technology and the tremendous numbers who access the binary world without comprehension creates a dangerous situation in multiple ways. Yet this is the situation in which cyber liability and technology E&O insurers are trying to insure a binary usage world.

See also: Future of Insurance to Address Cyber Perils  

With the whole picture in mind, it is time for insurers to start implementing, as soon as possible, solutions that will keep the future from resembling the past two decades. Insurers and insurance brokers alike need to start acting in accordance with what being part of a community means.

In its most basic form, a community is a group of people or organizations that exist in the same area or share a common purpose, and the most successful communities are the ones that come together and put the good of the community ahead of any individual member. Insurers would do well to establish a series of town halls in physical communities to discuss not only what cyber liability and technology E&O are but also every aspect of cybersecurity, from anti-virus software to which CPUs and GPUs are the least vulnerable to cyberattacks.

It would be especially helpful if some of these town hall seminars were dedicated to people 65 and older, because many organizations want to “help” seniors without providing them with reasonably secure cyber products. To date, seniors do not seem to have borne the brunt of cyberattacks. However, it is only a matter of time before cyber criminals realize the monetary value of focusing attacks on seniors.

Many insurance professionals are eager to point out that small and medium-sized businesses are extremely vulnerable to cyberattacks, but warnings from a distance are not an acceptable substitute, on so urgent an issue, for face-to-face human interaction. There is a reason that property and auto insurers in the 20th century used a phrase such as “like a good neighbor, State Farm is there.” A neighbor is a community member who is invested in the successes and challenges of others.

With the 2020 U.S. census coming up, there still has not been a unified community outreach effort on the part of insurers to help the census begin and end securely at the community level. The most efficient way insurers can help with the census is to provide public libraries and community centers with new computers and networking equipment and to lend them IT staff.

Insurers also need to work with the cybersecurity community and with K-12 schools around the world so that students understand how to be responsible netizens. Education needs to encourage the young to move from what is merely popular technologically toward what is actually effective and useful. If insurers do not work with the cybersecurity community, then how can educators and parents ever really know what responsible netizen activity looks like? Insurers can either work with others to start reducing that deficit, which will also reduce the frequency of breaches, or they can repeat their mistakes and forever put their profitability and the safety of their insureds in doubt.

In terms of effective global communication, we who are living now stand where once stood those who coped with the changes wrought by the printing press and its transformation of the world. However, modern global correspondence faces challenges that require insurers to start putting solutions in place now whose benefits will last for decades and centuries. With that in mind, it is time for insurers to bring to life an international competition that encourages students in the seventh through 12th grades to create educational websites or advanced robots, or that gives them a structured, interactive way to point out zero-day exploits and other vulnerabilities that would have a $500 million or larger impact on the world economy if used against the netizen community.

Insurers also need to start rating every piece of technology through an independent testing lab. The lab needs to be built with the authority and autonomy to ensure that its ratings are as impartial and accurate as possible, so that insurers can work with information that is as close to factual as possible. Insurers also need to tackle higher education and work with an organization like IEEE to finally bring the training of software developers/engineers into the 21st century. It is time for software engineers to meet requirements on par with those for structural engineers and attorneys. Not only will this ensure a higher minimum level of coding competency, but it will keep non-certified engineers from putting inept code into programs on which this world depends.

Helping the brilliant young become useful and positive contributors to the cyber community, creating an independent testing lab and working with other members of the netizen community to produce certified software engineers can only enable a netizen community that appropriately values and pursues safety, the common good and the future success of the cyber realm. All of this would be to the great benefit of cyber liability and technology E&O insurers and their insureds.

See also: Surveying Wreckage of Cybersecurity  

People often cite hackers’ increasingly sophisticated breach techniques, or technology companies’ hyper-evolving innovations, as the reasons dark-knight cybersecurity specialists have become so formidable. The reality, however, is that hackers have risen because insurers have failed to implement long-term solutions.

Cyber liability and technology E&O insurers have perhaps the best vantage point of any part of the private sector, because they watch in real time everything that happens before, during and after a breach. It is those insurers, especially cyber liability insurers, who say they can help and protect insureds, and who are actively offering their services on the world’s stage. Unfortunately, insurers have thus far acted as if they need only sprint to the finish line to help their insureds. This is not, though, a sprint. It is in fact a very long journey that insurers must undertake.

However, if insurers pace themselves, unite with each other to overcome shared challenges and reach out to other members of the netizen community, then they will be able to leave the winter of desolation behind and step into a future spring that is lively, safe, profitable and enduring.


In Third Parties We (Mis)trust?

Technology is transforming trust. Never before has there been a time when it’s been easier to start a distant geographical relationship. With a credible website and reasonable products or services, people are prepared to learn about companies half a world away and enter into commerce with them.

Society is changing radically when people find themselves trusting people with whom they’ve had no experience, e.g. on eBay or Facebook, more than with banks they’ve dealt with their whole lives.

Mutual distributed ledgers pose a threat to the trust relationship in financial services.

The History of Trust

Trust leverages a history of relationships to extend credit and benefit of the doubt to someone. Trust is about much more than money; it’s about human relationships, obligations and experiences and about anticipating what other people will do.

In risky environments, trust enables cooperation and permits voluntary participation in mutually beneficial transactions that are otherwise costly to enforce or cannot be enforced by third parties. By taking a risk on trust, we increase the amount of cooperation throughout society while simultaneously reducing the costs, unless we are wronged.

Trust is not a simple concept, nor is it necessarily an unmitigated good, but trust is the stock-in-trade of financial services. In reality, financial services trade on mistrust. If people trusted each other on transactions, many financial services might be redundant.

People use trusted third parties in many roles in finance, for settlement, as custodians, as payment providers, as poolers of risk. Trusted third parties perform three roles:

  • validate – confirming the existence of something to be traded and membership of the trading community;
  • safeguard – preventing duplicate transactions, i.e. someone selling the same thing twice or “double-spending”;
  • preserve – holding the history of transactions to help analysis and oversight, and in the event of disputes.

A ledger is a book, file or other record of financial transactions. People have used various technologies for ledgers over the centuries. The Sumerians used clay cuneiform tablets. Medieval folk split tally sticks. In the modern era, the implementation of choice for a ledger is a central database, found in all modern accounting systems. In many situations, each business keeps its own central database with all its own transactions in it, and these systems are reconciled, often manually and at great expense if something goes wrong.

But in cases where many parties interact and need to keep track of complex sets of transactions they have traditionally found that creating a centralized ledger is helpful. A centralized transaction ledger needs a trusted third party who makes the entries (validates), prevents double counting or double spending (safeguards) and holds the transaction histories (preserves). Over the ages, centralized ledgers are found in registries (land, shipping, tax), exchanges (stocks, bonds) or libraries (index and borrowing records), just to give a few examples.

The latest technological approach to all of this is the distributed ledger (aka blockchain aka distributed consensus ledger aka the mutual distributed ledger, or MDL, the term we’ll stick to here). To understand the concept, it helps to look back over the story of its development:

1960/’70s: Databases

The current database paradigm began around 1970 with the invention of the relational model, and the widespread adoption of magnetic tape for record-keeping. Society runs on these tools to this day, even though some important things are hard to represent using them. Trusted third parties work well on databases, but correctly recording remote transactions can be problematic.

One approach to remote transactions is to connect machines and work out the lumps as you go. But when data leaves one database and crosses an organizational boundary, problems start. For Organization A, the contents of Database A are operational reality, true until proven otherwise. But for Organization B, the message from A is a statement of opinion. Orders sit as “maybe” until payment is made and cleared past the last possible chargeback: This tentative quality is always attached to data from the outside.

1980/’90s: Networks

Ubiquitous computer networking came of age two decades after the database revolution, starting with protocols like email and hitting its full flowering with the invention of the World Wide Web in the early 1990s. The network continues to get smarter, faster and cheaper, as well as more ubiquitous – and it is starting to show up in devices like our lightbulbs under names like the Internet of Things. While machines can now talk to each other, the systems that help us run our lives do not yet connect in joined-up ways.

Although in theory information could just flow from one database to another with your permission, in practice the technical costs of connecting databases are huge. Worse, we go back to paper and metaphors from the age of paper because we cannot get the connection software right. All too often, the computer is simply a way to fill out forms: a high-tech paper simulator. It is nearly impossible to get two large entities to share our information between them on our behalf.

Of course, there are attempts to clarify this mess – to introduce standards and code reusability to help streamline business interoperability. You can choose from EDI, XML-EDI, JSON, SOAP, XML-RPC, JSON-RPC, WSDL and half a dozen more standards to “assist” your integration processes. The reason there are so many standards is that none of them finally solved the problem.

Take the problem of scaling collaboration. Say that two of us have paid the up-front costs of collaboration and have achieved seamless technical harmony, and now a third partner joins our union, then a fourth and a fifth … by five partners, we have 10 connections to debug; by 10 partners, the number is 45. The cost of collaboration keeps going up for each new partner who joins the network, and the result is small pools of collaboration that just will not grow. This isn’t an abstract problem – this is banking, this is finance, medicine, electrical grids, food supplies and the government.
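The arithmetic behind this quandary is simple: with n partners, every pair needs its own connection, for n(n-1)/2 in total. A few lines of Python make the quadratic growth concrete (the function name is ours, purely for illustration):

```python
from math import comb

def pairwise_connections(partners: int) -> int:
    """Connections needed so every partner talks directly to every
    other partner: n choose 2, i.e. n * (n - 1) / 2."""
    return comb(partners, 2)

# Each new partner must connect to all existing partners, so the
# total grows quadratically, not linearly.
for n in (2, 5, 10, 20):
    print(n, "partners ->", pairwise_connections(n), "connections")
# 5 partners -> 10 connections; 10 -> 45; 20 -> 190
```

A hub-and-spoke design collapses this to n connections, one per partner, which is exactly the leverage a hub such as Visa enjoys.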

A common approach to this quadratic quandary is to put somebody in charge, a hub-and-spoke solution. We pick an organization – Visa would be typical – and all agree that we will connect to Visa using its standard interface. Each organization has to get just a single connector right. Visa takes 1% off the top, making sure that everything clears properly.

But while a third party may be trusted, it doesn’t mean it is trustworthy. There are a few problems with this approach, but they can be summarized as “natural monopolies.” Being a hub for others is a license to print money for anybody that achieves incumbent status. Visa gets 1% or more of a very sizeable fraction of the world’s transactions with this game; Swift likewise.

If you ever wonder what the economic upside of the MDL business might be, just have a think about how big that number is across all forms of trusted third parties.

2000/’10s: Mutual Distributed Ledgers

MDL technology securely stores transaction records in multiple locations with no central ownership. MDLs allow groups of people to validate, record and track transactions across a network of decentralized computer systems with varying degrees of control of the ledger. Everyone shares the ledger. The ledger itself is a distributed data structure held in part or in its entirety by each participating computer system. The computer systems follow a common protocol to add transactions. The protocol is distributed using peer-to-peer application architecture. MDLs are not technically new – concurrent and distributed databases have been a research area since at least the 1970s. Z/Yen built its first one in 1995.
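The underlying data structure is simpler than it sounds: an append-only list of records, each committing to the hash of its predecessor, replicated on every participating system. A minimal sketch in Python (the record layout and function name are our own, not any particular MDL’s):

```python
import hashlib
import json
import time

def make_entry(prev_hash: str, payload: dict) -> dict:
    """Build one ledger entry. Each record embeds the hash of the
    previous record, so tampering with history breaks every later hash."""
    body = {"prev": prev_hash, "time": time.time(), "payload": payload}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return {**body, "hash": digest}

# The first entry chains from a conventional all-zero "genesis" hash.
genesis = make_entry("0" * 64, {"note": "ledger created"})
entry = make_entry(genesis["hash"], {"from": "A", "to": "B", "amount": 10})
```

Because each hash covers the previous hash, altering any historical entry changes every hash after it, which other replicas will reject; the common protocol the participants follow then decides which new entries get appended.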

Historically, distributed ledgers have suffered from two perceived disadvantages: insecurity and complexity. These two perceptions are changing rapidly because of the growing use of blockchain technology, the MDL of choice for cryptocurrencies. Cryptocurrencies need to:

  • validate – have a trust model for time-stamping transactions by members of the community;
  • safeguard – have a set of rules for sharing data of guaranteed accuracy;
  • preserve – have a common history of transactions.

If faith in the technology’s integrity continues to grow, then MDLs might substitute for two roles of a trusted third party: preventing duplicate transactions and providing a verifiable public record of all transactions. Trust moves from the third party to the technology. Emerging techniques, such as smart contracts and decentralized autonomous organizations, might in the future also permit MDLs to act as automated agents.

A cryptocurrency like bitcoin is an MDL with “mining on top.” The mining substitutes for trust: “proof of work” is simply proof that you have a warehouse of expensive computers working, and the proof is the output of their calculations! Cryptocurrency blockchains do not require a central authority or trusted third party to coordinate interactions, validate transactions or oversee behavior.
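That “warehouse of expensive computers working” boils down to a brute-force search: find a nonce that drives a hash below a target. A toy Python version of the principle (real bitcoin mining uses double SHA-256 over a specific block-header format; this sketch only illustrates the idea):

```python
import hashlib

def mine(header: bytes, difficulty_bits: int = 12) -> int:
    """Brute-force a nonce whose SHA-256 hash has `difficulty_bits`
    leading zero bits. Finding it takes ~2**difficulty_bits attempts;
    checking the answer takes a single hash."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # this nonce is the "proof of work"
        nonce += 1

nonce = mine(b"toy block header")
```

The asymmetry is the point: the miner burns electricity searching, while anyone can verify the result with one hash, so the ledger’s history becomes expensive to rewrite but cheap to audit.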

However, when the virtual currency is going to be exchanged for real-world assets, we come back to needing trusted third parties to trade ships or houses or automobiles for virtual currency. A big consequence may be that the first role of a trusted third party, validating an asset and identifying community members, becomes the most important. This is why MDLs may challenge the structure of financial services, even though financial services are here to stay.

Boring Ledgers Meet Smart Contracts

MDLs and blockchain architecture are essentially protocols that can work as well as hub-and-spoke for getting things done, but without a trusted third party at the center that might choose to exploit its natural monopoly. Even with smaller trusted third parties, MDLs have a kind of magic property: the same agreed data on all nodes – “distributed consensus” – rather than data passed around through messages.

In the future, smart contracts could store promises to pay and promises to deliver without a middleman or exposure to fraud. The same logic that secured “currency” in bitcoin can be used to secure little pieces of detached business logic. Smart contracts may automatically move funds in accordance with instructions given long ago, like a will or a futures contract. For pure digital assets, there is no counterparty risk, because the value to be transferred can be locked into the contract when it is created and released automatically when the conditions and terms are met. If the contract is clear, then fraud is impossible, because the program has real control of the assets involved, rather than requiring trustworthy middlemen like ATMs or car rental agents. Of course, such structures challenge some of our current thinking on liquidity.
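The “value locked in at creation” idea fits in a few lines. This toy Python class is purely illustrative (real smart contracts run on a ledger’s own virtual machine; every name here is invented):

```python
class EscrowContract:
    """Sketch of a self-executing promise: funds are deposited when the
    contract is created and released only when the agreed condition holds."""

    def __init__(self, amount: int, condition):
        self.locked = amount        # value held by the contract itself,
                                    # removing counterparty risk
        self.condition = condition  # predicate over some observed fact
        self.released = False

    def settle(self, observed_fact) -> int:
        """Pay out the locked value iff the condition is met; otherwise
        nothing moves. A second settlement attempt pays nothing."""
        if not self.released and self.condition(observed_fact):
            self.released = True
            payout, self.locked = self.locked, 0
            return payout
        return 0

# e.g. a futures-like promise: pay out once delivery is confirmed
contract = EscrowContract(100, condition=lambda fact: fact == "delivered")
```

Because the contract holds the funds from the moment it is created, neither party can renege: the payout happens exactly as programmed or not at all.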

Long Finance has a Zen-style koan: “If you have trust, I shall give you trust; if you have no trust, I shall take it away.” Cryptocurrencies and MDLs are gaining more and more trust. Trust in contractual relationships mediated by machines sounds like science fiction, but the financial sector has profitably adapted to the ATM, Visa, Swift, Big Bang, HFT and many other innovations. New ledger technology will enable new kinds of businesses, as reducing the cost of trust and fixing problems allows new kinds of enterprises to be profitable. The speed of adoption of new technology sorts winners from losers.

Make no mistake: The core generation of value has not changed; banks are trusted third parties. The implication, though, is that much more will be spent on identity – such as Anti-Money-Laundering/Know-Your-Customer checks backed by indemnity – and on asset validation than on transaction fees.

A U.S. political T-shirt about terrorists and religion inspires a closing thought: “It’s not that all cheats are trusted third parties; it’s that all trusted third parties are tempted to cheat.” MDLs move some of that trust into technology. And as costs and barriers to trusted third parties fall, expect demand and supply to increase.