Tag Archives: y2k

Y2K Rears Its Head One More Time

In the late 1990s, in the run up to Jan. 1, 2000, insurers deployed Y2K or “electronic date recognition” exclusions into a multitude of insurance policies. The logic made sense: The Y2K date change was a known risk and something that firms should have worked to eliminate, and, if Armageddon did materialize, well, that’s not something that the insurance industry wanted to cover anyway.

Sixteen years later, one would expect to find Y2K exclusions only in the Lloyd’s of London “Policy Wording Hall of Fame.” But not so fast.

Electronic date recognition exclusions are still frequently included in a variety of insurance contracts, even though it’s doubtful that many folks have given them more than a passing glance while chuckling about the good old days. And now is the time to take a closer look.

Last month, various cybersecurity response firms discovered that a new variant of the Shamoon malware was used to attack a number of firms in the Middle East. In 2012, the original version was used in a successful attack on Saudi Aramco, forcing the company to replace tens of thousands of desktop computers. Shamoon was used shortly thereafter to attack RasGas, and, most notoriously, the malware was used against Sony Pictures in late 2014. Shamoon has caused hundreds of millions of dollars in damages.

The new version, Shamoon v2, changes the target computer’s system clock to a random date in August 2012 — according to research from FireEye, the change may be designed to make sure that a piece of software subverted for the attack hasn’t had its license expire.

This change raises issues under existing electronic date recognition exclusions because many are not specifically limited to Jan. 1, 2000; they instead feature an “any other date” catch-all. For example, one of the standard versions reads, in part:

“This Policy does not cover any loss, damage, cost, claim or expense, whether preventative, remedial or otherwise, directly or indirectly arising out of or relating to any change, alteration, or modification involving the date change to the year 2000, or any other date change, including leap year calculations, to any such computer system, hardware, program or software and/or any microchip, integrated circuit or similar device in computer equipment or non-computer equipment, whether the property of the Insured or not.”


By our estimation, this exclusion is written broadly enough to exclude any losses resulting from a Shamoon v2 attack, if indeed the malware’s success is predicated on the change in system dates to 2012.

Given that the types of losses that Sony and Saudi Aramco suffered can be insured, firms shouldn’t be caught off guard. We advise a twofold approach: Work with your insurance broker to either modify language or consider alternative solutions; and ensure that your cybersecurity leaders are monitoring your systems for indicators of compromise, including subtle measures like clock changes.
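For teams acting on that second recommendation, the sketch below shows one rough way clock-change monitoring can work: it compares the wall clock against the monotonic clock, which is unaffected by clock resets, and raises an alert on a large jump such as the one Shamoon v2 makes. This is a minimal, illustrative Python script, not a product recommendation; the five-minute tolerance and 10-second polling interval are assumptions you would tune to your environment.

import time
from datetime import datetime, timedelta

DRIFT_THRESHOLD = timedelta(minutes=5)   # assumed tolerance before alerting
CHECK_INTERVAL = 10                      # seconds between checks (assumed)

def monitor_clock():
    wall_start = datetime.now()
    mono_start = time.monotonic()
    while True:
        time.sleep(CHECK_INTERVAL)
        elapsed = timedelta(seconds=time.monotonic() - mono_start)
        expected = wall_start + elapsed
        actual = datetime.now()
        drift = abs(actual - expected)
        if drift > DRIFT_THRESHOLD:
            # A large jump, forward or backward, is an indicator of compromise
            # worth escalating; here we simply print an alert.
            print(f"ALERT: system clock moved by {drift}; expected ~{expected}, got {actual}")
            # Re-baseline so future alerts reflect new jumps, not this one.
            wall_start, mono_start = datetime.now(), time.monotonic()

if __name__ == "__main__":
    monitor_clock()

In practice, the alert would feed a SIEM or ticketing queue rather than standard output, but the core signal is the same: the wall clock should never disagree with elapsed monotonic time by more than a small margin.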

Unstructured Data: New Cyber Worry

Companies are generating mountains of unstructured data and, in doing so, unwittingly adding to their security exposure.

Unstructured data is any piece of information that doesn’t get stored in a database or some other formal data management system. Some 80% of business data is said to be unstructured, and that percentage has to be rising. Think of it as employee-generated business information—the sum total of human ingenuity that we display in the workplace, typing away on productivity and collaboration software and dispersing our pearls of wisdom in digital communications.


Unstructured data is all of the data that we are generating on our laptops and mobile devices, storing in cloud services, transferring in email and text messages and pitching into social media sites.

Many companies are just starting to come to grips with the complex challenge of figuring out how to categorize and manage this deluge of unstructured data.

Sensitive data at risk

But what’s more concerning is the gaping security exposure.

It was unstructured data—in the form of a text message transcript of employees conversing about deflating footballs—that blindsided the New England Patriots NFL team and its star quarterback, Tom Brady.

Yet the full scope of risk created by unstructured data is much more profound.

“The risk that unstructured data poses dwarfs that of any other type of data,” says Adam Laub, product management vice president at STEALTHbits Technologies. “It is the least understood form of data in terms of access, activity, ownership and content.”

STEALTHbits helps companies that use Windows Active Directory identify and keep more detailed track of shared files that hold unstructured data. That may sound basic. Yet the fact that STEALTHbits is part of a thriving cottage industry of technology vendors helping organizations get a grip on unstructured data is truly a sign of the times. I met with Laub as he was pitching STEALTHbits’ technology at the recent RSA Conference in San Francisco. “Any single file can contain the data that puts an organization in the headlines, and turning a blind eye to the problem or claiming it’s too big to handle is not a valid excuse for why unstructured data hasn’t been secured properly,” Laub says.

A decade and a half has elapsed since the Y2K scare. During that period, business networks have advanced and morphed and now tie extensively into the Internet cloud and mobile devices.

Time to close loophole

Along the way, no one had the foresight to champion a standard architecture to keep track of—much less manage and secure—unstructured data, which continues to grow by leaps and bounds.

Criminals certainly recognize the opportunity for mischief that has resulted. It’s difficult to guard the cream when the cream can be accessed from endless digital paths.

Just ask Morgan Stanley. Earlier this year, a low-ranking Morgan Stanley financial adviser pilfered, then posted for sale, account records, including passwords, for 6 million clients. The employee was fired and is being investigated by the FBI. But Morgan Stanley has to deal with the hit to its reputation.

“The urgency is that your information is under attack today,” says Ronald Arden, vice president at Fasoo USA, a data management technology vendor. “Somebody is trying to steal your most important information, and it doesn’t matter if you’re a small company that makes widgets for the oil and gas industry or you’re Bank of America.”

Fasoo’s technology encrypts any newly generated data that could be sensitive and fosters a process for classifying which types of unstructured data should routinely be locked down, Arden told me.

Technology solutions, of course, are only as effective as the people and processes in place behind them. It is incumbent upon executives, managers and employees to help make security part and parcel of the core business mission. Those that don’t do this will continue to be easy targets.

Steps forward

Simple first steps include identifying where sensitive data exists. This should lead to clarity about data ownership and better choices about granting access to sensitive data, says STEALTHbits’ Laub.

This can pave the way to more formal “Data Access Governance” programs, in which data access activities are monitored and user behaviors are baselined. “This will go a long way towards enabling security personnel to focus on the events and activities that matter most,” says Laub.
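To make the first of those steps concrete, here is a minimal Python sketch of what “identifying where sensitive data exists” can look like at small scale: it walks a file share and flags documents containing patterns that resemble Social Security or payment card numbers. The share path and the patterns are illustrative assumptions, not a substitute for a commercial data-discovery or governance tool.

import os
import re

# Illustrative patterns only; real discovery tools use far richer detection.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
}

def scan_tree(root):
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", encoding="utf-8", errors="ignore") as fh:
                    text = fh.read()
            except OSError:
                continue  # unreadable file; skip it
            for label, pattern in SENSITIVE_PATTERNS.items():
                if pattern.search(text):
                    findings.append((path, label))
    return findings

if __name__ == "__main__":
    for path, label in scan_tree(r"\\fileshare\shared"):  # hypothetical share
        print(f"{label}: {path}")

Even a crude inventory like this gives an organization the list of files and owners it needs before any access-governance conversation can start.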

Smaller organizations may have to move much more quickly and efficiently. Taking stock of the most sensitive information in a small or mid-sized organization is doable, says Fasoo’s Arden.

“If you are a manufacturing company, the intellectual property around your designs and processes are the most critical pieces of information in your business, if you are a financial company it’s your customer records,” Arden says. “Think about securing that information with layers of encryption and security policies to guarantee that that information cannot leave your company.”

Some unstructured business data is benign and may not need to be locked down. “If I write you a memo that says, ‘We’re having a party tonight,’ that’s not a critical piece of information,” says Arden. “But a financial report or intellectual property or something related to healthcare or privacy, that’s probably something that you need to start thinking about locking down.”

3 Ways to Fix Operations Reports

In a prior life, I worked as a business systems analyst for a global hard drive manufacturer. After successfully navigating the Y2K crisis, we found ourselves inundated with custom report requests. We did an analysis and found that our enterprise system had more than 2,000 custom-coded operations reports, only 70 of which had been run in the last 90 days. Of course, the actively used operations reports were the source of endless user complaints and enhancement requests. That’s how we knew we had a good report: Complaints signaled actual use. Perhaps you’ve heard this broken record before; it happens everywhere.

It’s not hard to understand how this happens. A business person is trying to make a decision. Do I have enough resources? Are there bottlenecks I need to address? Was the process change I made last month effective? To guide the decision, she needs information, so she asks for a report. In the change request, she identifies data fields and recommends an output format. If the report is done well, it helps her make her decision. But that’s not the end of the story. Once the decision is made, the business person needs to make the next decision. Now that I know I need more resources, where should I position them? Last month’s process change wasn’t effective, so what can I do now? The old report becomes obsolete. The person needs another report (or an enhancement of the one requested). Rinse and repeat 20 times for 100 business users, and you get what we had: roughly 100 active reports and 1,900 inactive ones.

Let’s face it, operational reporting is like fighting a land war in Asia. There are no winners; there are only casualties. Although some reporting is unavoidable, there are three things you can do to drive improved business impact:

Get closer to the decision: Business users may request information, but they’re looking for advice. Put the effort into understanding the decisions they are trying to make. It will affect how you conceive your solution.

Apply the 20/80 rule: Providing information is an unending task. Put in 20% of the effort to get 80% of the business value. Then take your savings and…

Invest in innovation: Stop reinforcing outdated paradigms. Columns and rows are food for machines, not humans. Data visualization, advanced analytics, social media and external data sources – the opportunities abound. Save some capacity to pursue them.

Operational reporting is a paradox: Business users sometimes get what they ask for, but they never get what they need. What they ask for is information; what they need is advice. The historical paradigm for reporting is primarily financial: a statement of fact, in a standard format, used by external parties to judge the quality of the company. A financial report has no associated internal decisions – the only purpose of a financial report is to state unadulterated fact. Operational reporting, on the other hand, is fundamentally about advice. A business person needs to make a decision to influence the financial outcomes. The facts are simply a pit stop on the journey toward a decision.

Is It Possible to Insure Bitcoin Technology?

In the mid- to late 1990s, the insurance industry was struggling with “the Y2K crisis,” not only in connection with its own systems but, more importantly, with the systems of all its policyholders. As the chief underwriting officer of one of the largest subsidiaries of one of the largest insurance companies in the world, AIG, I had to determine our potential exposure if the computer systems of our policyholders failed. My conclusion: hundreds of millions of dollars of potential liability payouts.

Y2K — a problem that threatened to confuse computers about chronology beginning on Jan. 1, 2000, because years had historically been represented in software with just two digits, meaning that the year 2000 (represented as “00”) was indistinguishable from 1900 (also “00”) — was the insurance industry’s introduction to the hazards of insuring technology. To reduce that exposure, we had to figure out a way to motivate our corporate policyholders to take reasonable steps to manage their Y2K problem. Because one of the central purposes of an insurance policy is to motivate specific risk-reducing behavior, such as wearing a seat belt, the question became how to motivate risk reduction in connection with the impending problem.
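For readers who never lived through it, here is a tiny Python illustration of the two-digit ambiguity just described; the function and the age calculation are hypothetical, but the failure mode is the one described above.

from datetime import date

def age_from_two_digit_year(yy, assumed_century=1900):
    # Reproduce the legacy assumption that a stored "yy" belongs to the 1900s.
    birth_year = assumed_century + yy
    return date(2000, 1, 1).year - birth_year

# A customer born in 1965 is stored as "65", and on Jan. 1, 2000 the legacy
# logic still works. But a record created in 2000 is stored as "00" and read
# back as 1900, a 100-year error.
print(age_from_two_digit_year(65))  # 35
print(age_from_two_digit_year(0))   # 100, not 0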

So we created “Y2K insurance” and made it available only to those companies that took the right steps.

Well, the Y2K crisis came and went, and the insurance industry was relatively unscathed. Whether the introduction of a new insurance product helped, we will never know. What we do know is that the Y2K experience inspired the insurance industry to contemplate other technology risks we might insure. In the year 2000, the answer was immediately clear: the Internet. Many of us realized that the Internet represented a permanent change in the sociological and economic system; that life would never be the same. But how does one insure a new technology and a completely new way of conducting business? It was a scary thing to contemplate.

Fundamental to the insurance business is an analysis of historical actuarial information about frequency and severity of loss. We have decades of data on automobile accidents, broken down in every way imaginable. But how do you determine the right premium for a risk that has never existed?

For most carriers, the answer was, “You don’t.” But for a few, a different response emerged. A response that arose from a different culture—a risk-taking culture. A culture of innovation. “Cyber insurance” was born.

It took a while, but eventually we became comfortable with underwriting the frequency and severity of potential cyber attacks against our policyholders’ computer systems. Today, 15 years later, cyber insurance is a robust $1.3 billion industry, with more than 45 carriers providing some type of cyber insurance. And, despite the almost daily reports of cyber attacks, the industry is somehow making enough money to stick around.

Bitcoins 

Once again, the insurance industry is faced with a new risk in the technology space. Once again, the global economy is being transformed with a new way of conducting transactions. Once again, the insurance industry is faced with a dilemma: Do we ignore this new risk or face it head on?

There are more than 8 million Bitcoin “wallets” in existence today, and this number is expected to increase to 12 million by the end of the year. The total value of Bitcoins worldwide is around $4 billion. There are more than 100,000 Bitcoin transactions happening every day. More than 80,000 companies, from Microsoft to Dell to Expedia.com, accept Bitcoins as payment.

But how do you insure Bitcoins? More specifically, how do you insure the theft of the electronic private keys that are used to access Bitcoins? A smart insurer realizes that such a task is an exercise in both the familiar and the foreign. A private key is, after all, an electronic file. In many ways, the policies and procedures used in the network security space to protect any computer system holding any file are the same as those used to protect an electronic private key file. Equally true is that a good portion of private keys are stored in “cold storage,” meaning that they are not held in a computer that has access to the Internet. Some are actually stored in a bank vault. Storing valuables in a bank vault is also a well-understood risk and insurable. Finally, many companies that would be interested in purchasing Bitcoin theft insurance are themselves technology providers. Insurance for technology companies has existed for some time.

However, that’s where the analogy ends, and things begin to become difficult. First, the “cyber” insurance policies provided today actually do not insure the intrinsic value of the electronic file stolen. The policies do not cover the “value” of a Social Security number, for example. Furthermore, best practices in the securing of private keys in “hot storage” (computers connected to the Internet) rely upon the multisig, or multiple signature, technology, something with which insurance underwriters are generally unfamiliar. At best, underwriting the theft of Bitcoins requires coordination of multiple underwriting departments within an insurance company. More likely, it means creating new underwriting techniques and protocols.
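To make the multisig concept mentioned above concrete for non-technical readers, here is a toy Python sketch of the underlying idea: a withdrawal is authorized only when some minimum number of distinct, pre-approved keys sign off, so the theft of any single key is not enough. This is a conceptual illustration only, not real Bitcoin script or cryptography, and the 2-of-3 policy and key names are assumptions.

AUTHORIZED_KEYS = {"key_company", "key_custodian", "key_backup"}  # N = 3
REQUIRED_SIGNATURES = 2                                           # M = 2

def can_spend(signatures):
    # Return True only if at least M distinct authorized keys have signed.
    valid = {sig for sig in signatures if sig in AUTHORIZED_KEYS}
    return len(valid) >= REQUIRED_SIGNATURES

print(can_spend({"key_company"}))                   # False: one stolen key is not enough
print(can_spend({"key_company", "key_custodian"}))  # True: the 2-of-3 policy is satisfied

For an underwriter, the practical point is that the loss scenario shifts from “one key was stolen” to “M keys held by separate parties were compromised at once,” which is a different and far less familiar frequency question.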

Will the insurance industry be able to respond to the call? The insurance industry historically has not been known for innovation. So, how will we respond when we are faced with a new and potentially important risk, for which there is no historical actuarial data? Do we run away, or do we embrace a new need and a new opportunity as we did 15 years ago?

In February 2015, one company successfully designed the first true Bitcoin theft insurance policy along with a global “A”-rated insurance carrier for the benefit of BitGo, a leader in multisig technology. Will this policy be the only one of its kind? Or, as with cyber insurance 15 years ago, will it be only the first of hundreds of thousands of “Bitcoin theft” policies?

Only time will tell.

Make Your Data a Work-in-Process Tool

Heard recently: “Our organization has lots of analytics, but we really don’t know what to do with them.”

This is a common dilemma. Analytics (data analysis) are abundant. They are presented in annual reports and published in colorful graphics. But too often the effort ends there. Nice information, but what can be done with it? 

The answer is: a lot. It can change operations and outcomes, but only if it is handled right. A key is producing an analytics delivery system that is self-documenting.

Data evolution

Obviously, the basic ingredient for analytics is data. Fortunately, the last 30 years have been primarily devoted to data gathering.

Over that time, all industries have evolved through several phases in data collection and management. Mainframe and minicomputers produced data, and, with the inception of the PC in the '80s, data gathering became the business of everyone. Systems were clumsy in the early PC years, and there were significant restrictions to screen real estate and data volume. Recall the Y2K debacle caused by limiting year data to two characters.

Happily for the data-gathering effort, progress in technology has been rapid. Local and then wide area networks became available. Then came the Internet, along with ever more powerful hardware. Amazingly, wireless smartphones today are far more powerful computers than were the PCs of the '80s and '90s. Data gathering has been successful.

Now we have truckloads of data, often referred to as big data. People are trying to figure out how to handle it. In fact, a whole new industry is developing around managing the huge volumes of data. Once big data is corralled, analytic possibilities are endless.

The workers’ compensation industry has collected enormous volumes of data — yet little has been done with analytics to reduce costs and improve outcomes.

Embed analytic intelligence

The best way to apply analytics in workers’ compensation is to create ways to translate and deliver the intelligence to the operational front lines, to those who make critical decisions daily. Knowledge derived from analytics cannot change processes or outcomes unless it is embedded in the work of adjusters, medical case managers and others who make claims decisions.

Consulting graphics for guidance is cumbersome: Interpretation is uneven or unreliable, and the effects cannot be verified.  Therefore, the intelligence must be made easily accessible and specific to individual workers.

Front line decision-makers need online tools designed to easily access interpreted analytics that can direct decisions and actions. Such tools must be designed to target only the issues pertinent to individuals. Information should be specific.

When predictive modeling is employed as the analytic methodology, only certain claims are scored and flagged as risky at a given point in time. Instead, all claims data should be monitored electronically and continuously. If all claims are monitored for events and conditions predetermined by analytics, no high-risk claim can slip through the cracks, and personnel can be alerted to every claim with risky conditions.
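As a rough illustration of that kind of continuous, condition-based monitoring, the Python sketch below checks every claim against a small set of predetermined risk conditions and flags any claim that matches. The rule names, field names and thresholds are hypothetical; real conditions would come from the organization’s own analytics.

# Conditions derived from analytics; each returns True when a claim looks risky.
RISK_CONDITIONS = {
    "opioid_prescription": lambda c: "opioid" in c.get("medications", []),
    "long_duration": lambda c: c.get("days_open", 0) > 90,
    "attorney_involved": lambda c: c.get("attorney", False),
}

def evaluate_claims(claims):
    # Yield (claim_id, matched_conditions) for every claim that trips a rule.
    for claim in claims:
        matched = [name for name, rule in RISK_CONDITIONS.items() if rule(claim)]
        if matched:
            yield claim["id"], matched

claims = [
    {"id": "WC-1001", "days_open": 120, "medications": ["ibuprofen"], "attorney": False},
    {"id": "WC-1002", "days_open": 30, "medications": ["opioid"], "attorney": True},
]
for claim_id, reasons in evaluate_claims(claims):
    print(f"Alert adjuster on {claim_id}: {', '.join(reasons)}")

Because every claim is evaluated on every pass, the adjuster is alerted the moment a condition appears, rather than when a periodic model re-scoring happens to catch it.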

Self-documenting

The system that is developed to deliver analytics to operations should automatically self-document; that is, keep its own audit trail to continually document to whom the intelligence was sent, when and why. The system can then be expanded to document what action is taken based on the information delivered.

Without self-documentation, the analytic delivery system has no authenticity. Those who receive the information cannot be held accountable for whether or how they acted on it. When the system automatically self-documents, those who have received the information can be held accountable or commended.
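A minimal Python sketch of the self-documenting idea follows: every alert delivery writes an audit record (to whom, when and why), and any follow-up action is appended to the same log, so accountability is built in. The log format and field names are assumptions for illustration.

import json
from datetime import datetime, timezone

AUDIT_LOG = "analytics_audit_log.jsonl"  # assumed append-only log file

def deliver_alert(claim_id, recipient, reason):
    # Send the alert (delivery mechanism omitted) and record the audit entry.
    entry = {
        "claim_id": claim_id,
        "sent_to": recipient,
        "reason": reason,
        "sent_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(AUDIT_LOG, "a") as fh:
        fh.write(json.dumps(entry) + "\n")

def record_action(claim_id, actor, action):
    # Append a follow-up record documenting what was done with the alert.
    entry = {
        "claim_id": claim_id,
        "actor": actor,
        "action_taken": action,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(AUDIT_LOG, "a") as fh:
        fh.write(json.dumps(entry) + "\n")

deliver_alert("WC-1002", "adjuster_smith", "opioid_prescription")
record_action("WC-1002", "adjuster_smith", "referred to nurse case manager")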

Self-documenting systems also create what could be called Additionality. Additionality is the extent to which a new input adds to the existing inputs without replacing them and results in something greater. When the analytic delivery system automatically self-documents guidance and actions, a new layer of information is created. Analytic intelligence is linked to claims data and layered with directed action documentation.

A system that is self-documenting can also self-verify, meaning results of delivering analytics to operations can be measured. Claim conditions and costs can be measured with and without the impact of the analytic delivery system. Further analyses can be executed to measure what analytic intelligence is most effective and in what form and, importantly, what actions generate best results.

The analytic delivery system monitors all claims data, identifies claims that match analytic intelligence and embeds the interpreted information in operations. The data has become a work-in-process knowledge tool while analytics are linked directly to outcomes.