Tag Archives: Boeing

What Ethiopia Crash Says About Safety

When news broke about the crash of an Ethiopian Airlines Boeing 737, the first question that popped into my head was whether an older 737 model, still using the flawed rudder actuator, might have been involved.

Of course, it was actually the newest iteration of the 737, the Max 8. I’m no longer covering aviation. But having chronicled the saga of the 737’s flawed rudder design, which Boeing ultimately replaced, here is what I’m wondering:

  • I wonder whether this will turn out to be yet another instance, in a long line of them, of the manufacturer or the airline pushing the edge of the safety envelope for commercial reasons, with a catastrophic result that should have been anticipated and accounted for.
  • I wonder if there is a trail of maintenance records of related, precursor glitches occurring in the Max 8 fleet.
  • I wonder how rigorous the FAA was in vetting and approving the safety margins for the advanced functions in the Max 8’s complex, automated controls, which are intended to extend the range and capacity not just of the Max 8 but also of other 737 models now routinely used on long-range flights, including from the U.S. mainland to my home state of Hawaii.

If there is any evidence of the steady thinning of the 737’s safety margin translating into operational hiccups that point to the Ethiopian Airlines catastrophe, it should exist in the FAA Service Difficulty Reports airlines are required to file.

This is likely where plaintiff attorneys representing victims will hunt for leverage to win claims for their clients. However, with so much at stake, it wouldn’t surprise me if the defense attorneys representing Boeing and the airline push to settle all victims’ claims quickly for higher-than-normal amounts, thus shutting down the plaintiff attorneys. This is what happened after the Lauda Air 767 crash in Thailand, caused by a malfunctioning thrust reverser.

See also: New Risks Coming From Innovation  

Boeing launched the 737 in the 1960s as a small, short-haul transport under intense competitive pressure from McDonnell Douglas’ hot-selling DC-9. Competitive pressure drove Boeing to persuade the FAA to relax rules limiting the use of twin jets for very long overseas flights, first to enable trans-oceanic 777 and 787 flights, and then trans-oceanic 737 flights.

The frequency of major air disasters has been at a publicly acceptable level for a long time. But this disaster shows the safety margin of “smart” jet transports needs more attention. The grounding of Max 8s reinforces that notion. I hope regulators and the industry honor the 157 lives lost on the Ethiopian Airlines flight and address the systemic factors, as well as the specific cause, that precipitated this tragedy.


Strategist’s Guide to Artificial Intelligence

Jeff Heepke knows where to plant corn on his 4,500-acre farm in Illinois because of artificial intelligence (AI). He uses a smartphone app called Climate Basic, which divides Heepke’s farmland (and, in fact, the entire continental U.S.) into plots that are 10 meters square. The app draws on local temperature and erosion records, expected precipitation, soil quality and other agricultural data to determine how to maximize yields for each plot. If a rainy cold front is expected to pass by, Heepke knows which areas to avoid watering or irrigating that afternoon. As the U.S. Department of Agriculture noted, this use of artificial intelligence across the industry has produced the largest crops in the country’s history.

Climate Corp., the Silicon Valley–based developer of Climate Basic, also offers a more advanced AI app that operates autonomously. If a storm hits a region, or a drought occurs, it lowers local yield numbers. Farmers who have bought insurance to supplement their government coverage get a check; no questions asked, no paper filing necessary. The insurance companies and farmers both benefit from having a much less labor-intensive, more streamlined and less expensive automated claims process.

Monsanto paid nearly $1 billion to buy Climate Corp. in 2013, giving the company’s models added legitimacy. Since then, Monsanto has continued to upgrade the AI models, integrating data from farm equipment and sensors planted in the fields so that they improve their accuracy and insight as more data is fed into them. One result is a better understanding of climate change and its effects — for example, the northward migration of arable land for corn, or the increasing frequency of severe storms.

Applications like this are typical of the new wave of artificial intelligence in business. AI is generating new approaches to business models, operations and the deployment of people that are likely to fundamentally change the way business operates. And if it can transform an earthbound industry like agriculture, how long will it be before your company is affected?

An Unavoidable Opportunity

Many business leaders are keenly aware of the potential value of artificial intelligence but are not yet poised to take advantage of it. In PwC’s 2017 Digital IQ survey of senior executives worldwide, 54% of the respondents said they were making substantial investments in AI today. But only 20% said their organizations had the skills necessary to succeed with this technology (see “Winning with Digital Confidence,” by Chris Curran and Tom Puthiyamadam).

Reports on artificial intelligence tend to portray it as either a servant, making all technology more responsive, or an overlord, eliminating jobs and destroying privacy. But for business decision makers, AI is primarily an enabler of productivity. It will eliminate jobs, to be sure, but it will also fundamentally change work processes and might create jobs in the long run. The nature of decision making, collaboration, creative art and scientific research will all be affected; so will enterprise structures. Technological systems, including potentially your products and services, as well as your office and factory equipment, will respond to people (and one another) in ways that feel as if they are coming to life.

In their book Artificial Intelligence: A Modern Approach (Pearson, 1995), Stuart Russell and Peter Norvig define AI as “the designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment.” The most critical difference between AI and general-purpose software is in the phrase “take actions.” AI enables machines to respond on their own to signals from the world at large, signals that programmers do not directly control and therefore can’t anticipate.

The fastest-growing category of AI is machine learning, or the ability of software to improve its own activity by analyzing interactions with the world at large (see “The Road to Deep Learning,” below). This technology, which has been a continual force in the history of computing since the 1940s, has grown dramatically in sophistication during the last few years.

See also: Seriously? Artificial Intelligence?  

The Road to Deep Learning

This may be the first moment in AI’s history when a majority of experts agree the technology has practical value. From its conceptual beginnings in the 1950s, led by legendary computer scientists such as Marvin Minsky and John McCarthy, its future viability has been the subject of fierce debate. As recently as 2000, the most proficient AI system was roughly comparable, in complexity, to the brain of a worm. Then, as high-bandwidth networking, cloud computing, and high-powered graphics-enabled microprocessors emerged, researchers began building multilayered neural networks — still extremely slow and limited in comparison with natural brains, but useful in practical ways.

The best-known AI triumphs — in which software systems beat expert human players in Jeopardy, chess, Go, poker and soccer — differ from most day-to-day business applications. These games have prescribed rules and well-defined outcomes; every game ends in a win, loss or tie. The games are also closed-loop systems: They affect only the players, not outsiders. The software can be trained through multiple failures with no serious risks. You can’t say the same of an autonomous vehicle crash, a factory failure or a mistranslation.

There are currently two main schools of thought on how to develop the inference capabilities necessary for AI programs to navigate through the complexities of everyday life. In both, programs learn from experience — that is, the responses and reactions they get influence the way the programs act thereafter. The first approach uses conditional instructions (also known as heuristics) to accomplish this. For instance, an AI bot would interpret the emotions in a conversation by following a program that instructed it to start by checking for emotions that were evident in the recent past.
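To make the heuristic approach concrete, here is a minimal sketch in Python; the keyword lists and the messages it scans are invented for illustration, not drawn from any real product.

```python
# A minimal sketch of the heuristic (conditional-instruction) approach:
# emotions are inferred from hand-written rules, not learned from data.
# The keyword lists and the recent_messages input are illustrative only.

EMOTION_KEYWORDS = {
    "anger": {"furious", "outraged", "unacceptable"},
    "joy": {"great", "thrilled", "wonderful"},
    "sadness": {"sorry", "disappointed", "unfortunately"},
}

def detect_emotion(recent_messages):
    """Return the first emotion whose keywords appear, scanning
    the most recent messages first, as the rules instruct."""
    for message in reversed(recent_messages):
        words = set(message.lower().split())
        for emotion, keywords in EMOTION_KEYWORDS.items():
            if words & keywords:
                return emotion
    return "neutral"

print(detect_emotion(["Thanks for the update", "This delay is unacceptable"]))
# -> "anger"
```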

The second approach is known as machine learning. The machine is taught, using specific examples, to make inferences about the world around it. It then builds its understanding through this inference-making ability, without following specific instructions to do so. The Google search engine’s “next-word completion” feature is a good example of machine learning. Type in the word artificial, and several suggestions for the next word will appear, perhaps intelligence, selection and insemination. No one has programmed the search engine to seek those complements. Google chose the strategy of looking for the three words most frequently typed after artificial. With huge amounts of data available, machine learning can provide uncanny accuracy about patterns of behavior.
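The frequency-counting strategy behind next-word completion can be sketched in a few lines of Python. The tiny corpus below is invented; real systems count over vastly more data, but the principle of surfacing the words that most often follow “artificial” is the same.

```python
# A toy sketch of frequency-based next-word suggestion.
# The corpus is invented; the idea is simply to count which words
# most often follow a given word and surface the top three.
from collections import Counter, defaultdict

corpus = (
    "artificial intelligence is advancing "
    "artificial selection shaped crops "
    "artificial intelligence transforms business "
    "artificial insemination is used in farming"
).split()

follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def suggest(word, k=3):
    return [w for w, _ in follows[word].most_common(k)]

print(suggest("artificial"))  # -> ['intelligence', 'selection', 'insemination']
```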

The type of machine learning called deep learning has become increasingly important. A deep learning system is a multilayered neural network that learns representations of the world and stores them as a nested hierarchy of concepts many layers deep. For example, when processing thousands of images, it recognizes objects based on a hierarchy of simpler building blocks: straight lines and curved lines at the basic level, then eyes, mouths and noses, and then faces, and then specific facial features. Besides image recognition, deep learning appears to be a promising way to approach complex challenges such as speech comprehension, human-machine conversation, language translation and vehicle navigation (see Exhibit A).
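To make “multilayered” concrete, here is a minimal, untrained forward pass through a small fully connected network in Python with NumPy; the layer sizes and random weights are arbitrary stand-ins, and a real deep learning system would be far larger and trained on enormous datasets.

```python
# A minimal sketch of a multilayered ("deep") network: each layer
# transforms the previous layer's output, building up a hierarchy of
# representations. Sizes and weights here are arbitrary and untrained.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [784, 128, 64, 10]          # e.g. pixels -> features -> class scores
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    for w in weights[:-1]:
        x = np.maximum(0, x @ w)          # hidden layers apply a ReLU nonlinearity
    return x @ weights[-1]                # final layer produces class scores

scores = forward(rng.normal(size=784))
print(scores.shape)                        # (10,)
```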

Though it comes closer than any other machine to mimicking a human brain, a deep learning neural network is not suitable for all problems. It requires multiple processors with enormous computing power, far beyond conventional IT architecture; it will learn only by processing enormous amounts of data; and its decision processes are not transparent.

News aggregation software, for example, had long relied on rudimentary AI to curate articles based on people’s requests. Then it evolved to analyze behavior, tracking the way people clicked on articles and the time they spent reading, and adjusting the selections accordingly. Next it aggregated individual users’ behavior with the larger population, particularly those who had similar media habits. Now it is incorporating broader data about the way readers’ interests change over time, to anticipate what people are likely to want to see next, even if they have never clicked on that topic before. Tomorrow’s AI aggregators will be able to detect and counter “fake news” by scanning for inconsistencies and routing people to alternative perspectives.

AI applications in daily use include all smartphone digital assistants, email programs that sort entries by importance, voice recognition systems, image recognition apps such as Facebook Picture Search, digital assistants such as Amazon Echo and Google Home and much of the emerging Industrial Internet. Some AI apps are targeted at minor frustrations — DoNotPay, an online legal bot, has reversed thousands of parking tickets — and others, such as connected car and language translation technologies, represent fundamental shifts in the way people live. A growing number are aimed at improving human behavior; for instance, GM’s 2016 Chevrolet Malibu feeds data from sensors into a backseat driver–like guidance system for teenagers at the wheel.

Despite all this activity, the market for AI is still small. Market research firm Tractica estimated 2016 revenues at just $644 million. But it expects hockey stick-style growth, reaching $15 billion by 2022 and accelerating thereafter. In late 2016, there were about 1,500 AI-related startups in the U.S. alone, and total funding in 2016 reached a record $5 billion. Google, Facebook, Microsoft, Salesforce.com and other tech companies are snapping up AI software companies, and large, established companies are recruiting deep learning talent and, like Monsanto, buying AI companies specializing in their markets. To make the most of this technology in your enterprise, consider the three main ways that businesses can or will use AI:

  • Assisted intelligence, now widely available, improves what people and organizations are already doing.
  • Augmented intelligence, emerging today, enables organizations and people to do things they couldn’t otherwise do.
  • Autonomous intelligence, being developed for the future, creates and deploys machines that act on their own.

See also: Is AI the End of Jobs or a Beginning?  

Many companies will make investments in all three during the next few years, drawing from a wide variety of applications (see Exhibit 1). They complement one another but require different types of investment, different staffing considerations and different business models.

Assisted Intelligence

Assisted intelligence amplifies the value of existing activity. For example, Google’s Gmail sorts incoming email into “Primary,” “Social” and “Promotions” default tabs. The algorithm, trained with data from millions of other users’ emails, makes people more efficient without changing the way they use email or altering the value it provides.
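A heavily simplified sketch of that kind of trained sorter, using scikit-learn and a handful of invented messages, looks like this; Gmail’s actual models are far more sophisticated, but the pattern of learning tab labels from past examples is the same.

```python
# A toy sketch of learning to sort email into tabs from labeled examples.
# The training messages are invented; a production system would train on
# millions of messages and far richer features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Quarterly report attached, please review before Friday",
    "Your friend tagged you in a photo",
    "Flash sale: 40% off everything this weekend",
    "Meeting moved to 3pm, see updated agenda",
    "New comment on your post",
    "Last chance to claim your discount code",
]
tabs = ["Primary", "Social", "Promotions", "Primary", "Social", "Promotions"]

classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(messages, tabs)

print(classifier.predict(["Exclusive offer: save 30% today"]))  # likely ['Promotions']
```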

Assisted intelligence tends to involve clearly defined, rules-based, repeatable tasks. These include automated assembly lines and other uses of physical robots; robotic process automation, in which software-based agents simulate the online activities of a human being; and back-office functions such as billing, finance and regulatory compliance. This form of AI can be used to verify and cross-check data — for example, when paper checks are read and verified by a bank’s ATM. Assisted intelligence has already become common in some enterprise software processes. In “opportunity to order” (basic sales) and “order to cash” (receiving and processing customer orders), the software offers guidance and direction that was formerly available only from people.

The Oscar W. Larson Co. used assisted intelligence to improve its field service operations. This 70-plus-year-old, family-owned general contractor provides, among other services to the oil and gas industry, maintenance and repair for point-of-sale systems and fuel dispensers at gas stations. One costly and irritating problem is “truck rerolls”: service calls that have to be rescheduled because the technician lacks the tools, parts or expertise for a particular issue. After analyzing data on service calls, the AI software showed how to reduce truck rerolls by 20%, a rate that should continue to improve as the software learns to recognize more patterns.

Assisted intelligence apps often involve computer models of complex realities that allow businesses to test decisions with less risk. For example, one auto manufacturer has developed a simulation of consumer behavior, incorporating data about the types of trips people make, the ways those affect supply and demand for motor vehicles and the variations in those patterns for different city topologies, marketing approaches and vehicle price ranges. The model spells out more than 200,000 variations for the automaker to consider and simulates the potential success of any tested variation, thus assisting in the design of car launches. As the automaker introduces cars and the simulator incorporates the data on outcomes from each launch, the model’s predictions will become ever more accurate.
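As a rough sketch of how such a what-if simulator can be structured, the Python below enumerates combinations of illustrative launch parameters and scores each with a deliberately crude demand function; the parameters and scoring are invented stand-ins for the proprietary model described above.

```python
# A crude sketch of enumerating launch variations and scoring each one.
# Parameter values and the demand model are invented placeholders.
from itertools import product

city_types = ["dense_urban", "suburban", "rural"]
marketing = ["digital_only", "mixed", "dealer_led"]
price_points = [22_000, 27_000, 32_000]

def simulated_sales(city, campaign, price):
    base = {"dense_urban": 9_000, "suburban": 7_000, "rural": 3_000}[city]
    lift = {"digital_only": 1.05, "mixed": 1.15, "dealer_led": 1.00}[campaign]
    return base * lift * (30_000 / price)      # cheaper cars sell more, crudely

results = [
    ((city, campaign, price), simulated_sales(city, campaign, price))
    for city, campaign, price in product(city_types, marketing, price_points)
]
best = max(results, key=lambda item: item[1])
print(f"{len(results)} variations simulated; best: {best}")
```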

AI-based packages of this sort are available on more and more enterprise software platforms. Success with assisted intelligence should lead to improvements in conventional business metrics such as labor productivity, revenues or margins per employee and average time to completion for processes. Much of the cost involved is in the staff you hire, who must be skilled at marshaling and interpreting data. To evaluate where to deploy assisted intelligence, consider two questions: What products or services could you easily make more marketable if they were more automatically responsive to your customers? Which of your current processes and practices, including your decision-making practices, would be more powerful with more intelligence?

Augmented Intelligence

Augmented intelligence software lends new capability to human activity, permitting enterprises to do things they couldn’t do before. Unlike assisted intelligence, it fundamentally alters the nature of the task, and business models change accordingly.

For example, Netflix uses machine learning algorithms to do something media has never done before: suggest choices customers would probably not have found themselves, based not just on the customer’s patterns of behavior but on those of the audience at large. A Netflix user, unlike a cable TV pay-per-view customer, can easily switch from one premium video to another without penalty, after just a few minutes. This gives consumers more control over their time. They use it to choose videos more tailored to the way they feel at any given moment. Every time that happens, the system records that observation and adjusts its recommendation list — and it enables Netflix to tailor its next round of videos to user preferences more accurately. This leads to reduced costs and higher profits per movie, and a more enthusiastic audience, which then enables more investments in personalization (and AI). Left outside this virtuous circle are conventional advertising and television networks. No wonder other video channels, such as HBO and Amazon, as well as recorded music channels such as Spotify, have moved to similar models.
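A bare-bones illustration of recommending from the audience at large is viewer-to-viewer similarity over a tiny, invented ratings matrix; Netflix’s production systems are vastly more elaborate, but the core idea of surfacing titles favored by viewers with similar histories is the same.

```python
# A bare-bones sketch of audience-wide recommendation: suggest titles that
# viewers with similar histories rated highly. The ratings matrix is invented.
import numpy as np

titles = ["Drama A", "Comedy B", "Thriller C", "Documentary D"]
# rows = viewers, columns = titles, values = ratings (0 means unseen)
ratings = np.array([
    [5, 0, 4, 1],
    [4, 1, 5, 0],
    [1, 5, 0, 4],
    [0, 4, 1, 5],
], dtype=float)

def recommend(viewer_index, k=1):
    target = ratings[viewer_index]
    # cosine similarity between this viewer and every other viewer
    sims = ratings @ target / (np.linalg.norm(ratings, axis=1) * np.linalg.norm(target) + 1e-9)
    sims[viewer_index] = 0
    predicted = sims @ ratings                 # weight others' ratings by similarity
    predicted[target > 0] = -np.inf            # don't re-recommend what's been seen
    return [titles[i] for i in np.argsort(predicted)[::-1][:k]]

print(recommend(0))  # -> ['Comedy B'], the only title viewer 0 has not yet seen
```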

Over time, as algorithms grow more sophisticated, the symbiotic relationship between human and AI will further change entertainment industry practices. The unit of viewing decision will probably become the scene, not the story; algorithms will link scenes to audience emotions. A consumer might ask to see only scenes where a Meryl Streep character is falling in love, or to trace a particular type of swordplay from one action movie to another. Data accumulating from these choices will further refine the ability of the entertainment industry to spark people’s emotions, satisfy their curiosity and gain their loyalty.

Another current use of augmented intelligence is in legal research. Though most cases are searchable online, finding relevant precedents still requires many hours of sifting through past opinions. Luminance, a startup specializing in legal research, can run through thousands of cases in a very short time, providing inferences about their relevance to a current proceeding. Systems like these don’t yet replace human legal research. But they dramatically reduce the rote work conducted by associate attorneys, a job rated as the least satisfying in the U.S. Similar applications are emerging for other types of data sifting, including financial audits, interpreting regulations, finding patterns in epidemiological data and (as noted above) farming.

To develop applications like these, you’ll need to marshal your own imagination to look for products, services or processes that would not be possible at all without AI. For example, an AI system can track a wide number of product features, warranty costs, repeat purchase rates and more general purchasing metrics, bringing only unusual or noteworthy correlations to your attention. Are a high number of repairs associated with a particular region, material or line of products? Could you use this information to redesign your products, avoid recalls or spark innovation in some way?

The success of an augmented intelligence effort depends on whether it has enabled your company to do new things. To assess this capability, track your margins, innovation cycles, customer experience and revenue growth as potential proxies. Also watch your impact on disruption: Are your new innovations doing to some part of the business ecosystem what, say, ride-hailing services are doing to conventional taxi companies?

You won’t find many off-the-shelf applications for augmented intelligence. They involve advanced forms of machine learning and natural language processing, plus specialized interfaces tailored to your company and industry. However, you can build bespoke augmented intelligence applications on cloud-based enterprise platforms, most of which allow modifications in open source code. Given the unstructured nature of your most critical decision processes, an augmented intelligence application would require voluminous historical data from your own company, along with data from the rest of your industry and related fields (such as demographics). This will help the system distinguish external factors, such as competition and economic conditions, from the impact of your own decisions.

The greatest change from augmented intelligence may be felt by senior decision makers, as the new models often give them new alternatives to consider that don’t match their past experience or gut feelings. They should be open to those alternatives, but also skeptical. AI systems are not infallible; just like any human guide, they must show consistency, explain their decisions and counter biases, or they will lose their value.

Autonomous Intelligence

Very few autonomous intelligence systems — systems that make decisions without direct human involvement or oversight — are in widespread use today. Early examples include automated trading in the stock market (about 75% of Nasdaq trading is conducted autonomously) and facial recognition. In some circumstances, algorithms are better than people at identifying other people. Other early examples include robots that dispose of bombs, gather deep-sea data, maintain space stations and perform other tasks inherently unsafe for people.

The most eagerly anticipated forms of autonomous intelligence — self-driving cars and full-fledged language translation programs — are not yet ready for general use. The closest autonomous service so far is Tencent’s messaging and social media platform WeChat, which has close to 800 million daily active users, most of them in China. The program, which was designed primarily for use on smartphones, offers relatively sophisticated voice recognition, Chinese-to-English language translation, facial recognition (including suggestions of celebrities who look like the person holding the phone) and virtual bot friends that can play guessing games. Notwithstanding their cleverness and their pioneering use of natural language processing, these are still niche applications, and still very limited by technology. Some of the most popular AI apps, for example, are small, menu- and rule-driven programs, which conduct fairly rudimentary conversations around a limited group of options.

See also: Machine Learning to the Rescue on Cyber?  

Despite the lead time required to bring the technology further along, any business prepared to base a strategy on advanced digital technology should be thinking seriously about autonomous intelligence now. The Internet of Things will generate vast amounts of information, more than humans can reasonably interpret. In commercial aircraft, for example, so much flight data is gathered that engineers can’t process it all; thus, Boeing has announced a $7.5 million partnership with Carnegie Mellon University, along with other efforts to develop AI systems that can, for example, predict when airplanes will need maintenance. Autonomous intelligence’s greatest challenge may not be technological at all — it may be companies’ ability to build in enough transparency for people to trust these systems to act in their best interest.

First Steps

As you contemplate the introduction of artificial intelligence, articulate what mix of the three approaches works best for you.

  • Are you primarily interested in upgrading your existing processes, reducing costs and improving productivity? If so, then start with assisted intelligence, probably with a small group of services from a cloud-based provider.
  • Do you seek to build your business around something new — responsive and self-driven products, or services and experiences that incorporate AI? Then pursue an augmented intelligence approach, probably with more complex AI applications resident on the cloud.
  • Are you developing a genuinely new technology? Most companies will be better off primarily using someone else’s AI platforms, but, if you can justify building your own, you may become one of the leaders in your market.

The transition among these forms of AI is not clean-cut; they sit on a continuum. In developing their own AI strategy, many companies begin somewhere between assisted and augmented, while expecting to move toward autonomous eventually (see Exhibit 2).

Though investments in AI may seem expensive now, the costs will decline over the next 10 years as the software becomes more commoditized. “As this technology continues to mature,” writes Daniel Eckert, a managing director in emerging technology services for PwC US, “we should see the price adhere toward a utility model and flatten out. We expect a tiered pricing model to be introduced: a free (or freemium model) for simple activities, and a premium model for discrete, business-differentiating services.”

AI is often sold on the premise that it will replace human labor at lower cost — and the effect on employment could be devastating, though no one knows for sure. Carl Benedikt Frey and Michael Osborne of Oxford University’s engineering school have calculated that AI will put 47% of the jobs in the U.S. at risk; a 2016 Forrester research report estimated it at 6%, at least by 2025. On the other hand, Baidu Research head (and deep learning pioneer) Andrew Ng recently said, “AI is the new electricity,” meaning that it will be found everywhere and create jobs that weren’t imaginable before its appearance.

At the same time that AI threatens the loss of an almost unimaginable number of jobs, it is also a hungry, unsatisfied employer. The lack of capable talent — people skilled in deep learning technology and analytics — may well turn out to be the biggest obstacle for large companies. The greatest opportunities may thus be for independent businesspeople, including farmers like Jeff Heepke, who no longer need scale to compete with large companies, because AI has leveled the playing field.

It is still too early to say which types of companies will be the most successful in this area — and we don’t yet have an AI model to predict it for us. In the end, we cannot even say for sure that the companies that enter the field first will be the most successful. The dominant players will be those that, like Climate Corp., Oscar W. Larson, Netflix and many other companies large and small, have taken AI to heart as a way to become far more capable, in a far more relevant way, than they otherwise would ever be.

The Dark Side of Rapid Change

Global trade and investment have been great engines of progress for much of the world. Over the past two decades, poorer countries reduced the gap between themselves and their richer counterparts for the first time since the Industrial Revolution, in no small part because of the opportunities opened by global trade. Technology has the same transformative potential in industries as varied as energy, health care, transportation and education. Inventions that are imminent or already here could transform the lives of billions of people for the better.

Yet, as we see in the 2016 U.S. election campaign, and as we have seen in Europe and elsewhere, rapid change has a dark side. If too many people are unable to adapt quickly and successfully to these changes, they will push back – blaming trade or immigrants or the elites – and demand a reversion to a simpler time.

The task of governments is to help people manage these transformations so that they benefit many and do as little harm as possible. In the U.S., governments mostly failed at that task during the era of globalization; if the full benefits of the coming technologies are to be enjoyed, governments will have to do much better this time around.

See also: ‘Interactive Finance’: Meshing with Google  

The competitive pressures created by globalization should have been no surprise. About 45 years ago, President Richard Nixon’s top international economic adviser, Pete Peterson, warned him that rising competition from Japan and Germany, with much more on the way, “poses adjustment problems which simply cannot be ignored.”

Americans have unquestionably gained by the lower prices and higher quality that import competition enabled. Apple iPhones and the latest Boeing jets are the result of the collective input of tens of thousands of collaborators in dozens of countries around the world. But many lost well-paid manufacturing jobs to import competition or outsourcing, and the U.S. government has made little effort to mitigate those costs, even in worker retraining.

President John F. Kennedy promised in 1962 that the government would help American workers who lost out to trade competition as the U.S. lowered its barriers to imports. “When considerations of national policy make it desirable to avoid higher tariffs, those injured by the competition should not be required to bear the full brunt of the impact,” he said. But today, the U.S. spends a smaller proportion of its wealth on worker retraining than any of the other 34 member countries of the Organization for Economic Co-operation and Development except for Mexico and Chile.

Too often, the attitude of the U.S. government has been deeply irresponsible, assuming that markets would simply sort everything out for the best. In the long run, everybody may end up with work and income, but, in the short run, as Peterson told Nixon, the failure to help Americans adapt to the new reality will “leave long periods when the transition is painful beyond endurance.”

With technology change, too, we know well in advance exactly what is coming. Driverless technology, for example, will soon become the standard in the trucking industry. Driverless trucks can run 24 hours a day and won’t demand overtime pay. There are 3.5 million truck drivers in the U.S., and an additional 5.5 million jobs in related industries – roughly one in every 15 American workers. They could perhaps go to work for UPS or deliver pizzas, but many of those delivery jobs will be lost to drones.

Personal-care robots will increasingly replace home healthcare aides, and self-checkout machines are already replacing retail-store clerks; these are jobs that filled some of the gap left by the disappearance of manufacturing jobs to global competition, but they, too, will soon be under siege. Automation is even hitting law and education, two sectors long thought immune to technological substitution.

See also: How Technology Breaks Down Silos  

These vulnerabilities necessitate something that too often was absent in the era of globalization: good public policies. Artificial intelligence will transform teaching, for example, but, without access to the highest-speed broadband, students in poor and rural areas will fall further behind their urban counterparts. And unless we strengthen social safety nets and retraining schemes, there will be far too many losers in the labor market. There is no way to avoid the huge impact that technology will have on employment; we have to prepare for it and help those whose skills it antiquates.

Much more even than globalization, technology is going to create upheaval and destroy industries and jobs. This can be for the better, helping us create more interesting jobs or freeing up time for leisure and artistic pursuits. But unless we find ways to share the prosperity and help Americans adapt to the coming changes, many could be left worse off than they are. And, as we have seen this year, that is a recipe for an angry backlash—and political upheaval.

This article was written with Edward Alden.

The Mechanics of Blockchains

Blockchain technology is like a three-trick pony. It essentially combines three slightly clumsy computer tricks to mimic decisions that a human administrator routinely makes. The difference is that, if done correctly, the computer can perform some of these decisions with great speed, accuracy and scalability. The peril is that, if done incorrectly, the computer can propagate an incorrect outcome with the same stunning efficiency.

1: The Byzantine Generals’ Dilemma

A scenario first described in 1982 at SRI International models the first trick. This problem simulation refers to a hypothetical group of military generals, each commanding a portion of the Byzantine Army, who have encircled a city that they intend to conquer. They have determined that: 1. They all must attack together, or 2. They all must retreat together. Any other combination would result in annihilation.

The problem is complicated by two conditions: 1. There may be one or more traitors among the leadership, 2. The messengers carrying the votes about whether to attack or retreat are subject to being intercepted. So, for instance, a traitorous general could send a tie-breaking vote in favor of attack to those who support the attack, and a no vote to those who support a retreat, intentionally causing disunity and a rout.

See also: Can Blockchains Be Insured?  

A Byzantine Fault Tolerant system may be achieved with a simple test for unanimity. After the vote is called, each general then “votes on the vote,” verifying that their own vote was registered correctly. The second vote must be unanimous. Any other outcome would trigger a default order to retreat.
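A toy simulation of that “vote on the vote” rule can be written in a few lines of Python; the generals, votes and traitorous behavior below are invented purely for illustration.

```python
# A toy simulation of the unanimity test described above: after the first
# vote, each general confirms that his own vote was recorded correctly.
# Any discrepancy on that second, confirming round forces a retreat.
# The generals and the traitor's behavior are invented for illustration.

def tally(reported_votes, confirmations):
    """reported_votes: what each general says the group decided.
    confirmations: each general's yes/no that his own vote was recorded."""
    if not all(confirmations):
        return "retreat"                     # default order on any discrepancy
    if len(set(reported_votes)) != 1:
        return "retreat"                     # the reported outcome must be unanimous
    return reported_votes[0]

# Honest round: everyone reports the same outcome and confirms it.
print(tally(["attack"] * 4, [True] * 4))                               # -> "attack"

# A traitor relays "attack" to some and "retreat" to others; the
# confirming round exposes the inconsistency and the army retreats.
print(tally(["attack", "attack", "retreat", "attack"], [True] * 4))    # -> "retreat"
```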

Modern examples of Byzantine Fault Tolerant Systems:

The analogy for networks is that computers are the generals and the instruction “packet” is the messenger. To secure the generals is to secure the system. Similar strategies are commonplace in engineering applications, from aircraft to robotics to any autonomous vehicle, where computers vote and then “vote on the vote.” The Boeing 777 and 787 use Byzantine-proof algorithms that convert environmental data into movements of, say, a flight control surface. Each is clearly insurable in the highly regulated commercial aviation industry. So this is good news for blockchains.

2: Multi-Key Cryptography

While the Byzantine Fault Tolerant strategy is useful for securing the nodes in a network (the generals), multi-key cryptography is for securing the packets of information that they exchange. On a decentralized ledger, it is important that both the people authorized to send information and the people authorized to access it are verified, and that the information cannot be tampered with in transit. Society now expends a great deal of energy in bureaucratic systems that perform these essential functions to prevent theft, fraud, spoofing and malicious attacks. Trick #2 allows this to be done with software.

Assume for a moment that a cryptographic key is like any typical key for opening locks. The computer can fabricate matched sets of keys that recognize each other. Each party to the transaction has a public key and a private key. The public key may be widely distributed, because it reveals nothing to anyone without the related private key.

Suppose that Alice has a secret to share with Bob. She can put the secret in a little digital vault and seal it using her private key + Bob’s public key. She then sends the package to Bob over email. Bob can open the package with his private key + Alice’s public key. This ensures that the sender and receiver are both authorized and that the package is secured during transit.
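One way to act out Alice and Bob’s exchange in code is with an authenticated-encryption “box,” for example PyNaCl’s Box (just one library choice among several); the sketch below assumes PyNaCl is installed.

```python
# A sketch of the Alice-and-Bob exchange using PyNaCl's Box construction
# (one library choice among several). Sealing with Alice's private key and
# Bob's public key both encrypts the secret and authenticates the sender.
from nacl.public import PrivateKey, Box

alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()
alice_public, bob_public = alice_private.public_key, bob_private.public_key

# Alice seals the vault: her private key + Bob's public key.
sealed = Box(alice_private, bob_public).encrypt(b"meet at dawn")

# Bob opens it: his private key + Alice's public key.
# Tampering with `sealed` in transit would make decryption fail outright.
print(Box(bob_private, alice_public).decrypt(sealed))   # b'meet at dawn'
```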

3: The Time Keeper

Einstein once said that the only reason for time is so that everything doesn’t happen at once. There are several ways to establish order in a set of data. The first is for everyone to synchronize their clocks relative to Greenwich, England, and embed each and every package with dates of creation, access records, revisions, dates of exchange, etc. Then we must try to manage these individual positions, revisions and copies moving through digital space and time.

The other way is to create a moving background (like in the old TV cartoons) and indelibly attach the contracts as the background passes by. To corrupt one package, you would need to hijack the whole train. The theory is that it would be prohibitively expensive, far in excess of the value of the single package, to do so.

The blockchain’s software performs the following routine to accomplish the equivalent process: Consider for a moment a long line of bank vaults. Inside each vault is the key or combination to the vault immediately to the right. There are only two rules: 1. Each key can be used only once, and 2. No two vaults can be open at the same time. Acting this out physically is a bit of a chore, but security is assured, and there is no way to go backward to corrupt the earlier frames. The only question now is: Who is going to perform this chore for the benefit of everyone else, and why?
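The line of vaults can be mimicked with a hash chain: each block records a fingerprint of the block before it, so altering any earlier block breaks every link after it. A minimal sketch using Python’s standard hashlib:

```python
# A minimal hash chain: each block carries the hash of the previous block,
# so corrupting any earlier "vault" invalidates every block that follows.
import hashlib
import json

def make_block(data, previous_hash):
    block = {"data": data, "previous_hash": previous_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def chain_is_valid(chain):
    for prev, current in zip(chain, chain[1:]):
        recomputed = hashlib.sha256(json.dumps(
            {"data": current["data"], "previous_hash": current["previous_hash"]},
            sort_keys=True).encode()).hexdigest()
        if current["previous_hash"] != prev["hash"] or current["hash"] != recomputed:
            return False
    return True

chain = [make_block("genesis", "0" * 64)]
for payload in ["contract A", "contract B", "contract C"]:
    chain.append(make_block(payload, chain[-1]["hash"]))

print(chain_is_valid(chain))        # True
chain[1]["data"] = "contract A (tampered)"
print(chain_is_valid(chain))        # False
```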

Finally, here is why the coin is valuable

There are several ways to push this train along. Bitcoin uses something called a proof-of-work algorithm. Rather than hiding the combinations inside each vault, a bunch of computers in a worldwide network all compete to guess the combination to the lock by solving a puzzle that is difficult to crack but easy to verify. It’s like solving a Rubik’s Cube: the task is hard to do, but everyone can easily see a solution. That is sufficient proof that work has been done and therefore that the solved block is unique and valid, thereby establishing consensus.
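A stripped-down version of such a puzzle is easy to write: find a nonce whose hash begins with a required number of zeros. Finding it takes many guesses; checking a claimed answer takes one. The sketch below uses Python’s hashlib and an arbitrary difficulty setting.

```python
# A stripped-down proof-of-work puzzle: grind through nonces until the
# hash of (block data + nonce) starts with a required number of zeros.
# Finding the nonce is laborious; verifying a claimed solution is instant.
import hashlib

DIFFICULTY = 4                                   # leading hex zeros required

def mine(block_data):
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return nonce, digest
        nonce += 1

def verify(block_data, nonce):
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

nonce, digest = mine("block #1: Alice pays Bob 5")
print(nonce, digest[:12], verify("block #1: Alice pays Bob 5", nonce))
```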

See also: Blockchain: No More Double-Entry Books?

Whoever solves the puzzle is awarded electronic tokens called bitcoin (with a lowercase b). This is sort of like those little blue tickets that kids get at the arcade and can exchange for fun prizes on the way out. These bitcoins simply act as an incentive for people to run the computers that solve the puzzles that keep the train rolling.

Bitcoins (indeed, all cryptocurrencies) MUST have value because, if they did not, their respective blockchains would stop cold.

A stalled blockchain would be the crypto-currency equivalent of bankruptcy. This may account for some amount of hype-fueled speculation surrounding the value of such digital tokens. Not surprisingly, the higher the price, the better the blockchain operates.

While all of this seems a bit confusing, keep in mind that we are describing the thought patterns of a computer, not necessarily a human.

The important thing is that we can analyze the mathematics. From an insurability standpoint, most of the essential ingredients needed to offer blockchain-related insurance products already exist:

1. The insurer can identify the risk exposures associated with generals, traitors, locks, vaults, trains and puzzles.

2. The insurer can calculate probability of failure by observing:

  • The degree of Byzantine fault tolerance.
  • The strength of the cryptography.
  • The relative value of the coins (digital tokens).

3. The consequences of failure are readily foreseeable by traditional accounting where the physical nature of the value can be assessed, such as a legal contract.

We can therefore conclude that each of the tricks performed by this fine little pony is individually insurable. It follows that the whole rodeo is also insurable if, and only if, full transparency is provided to all stakeholders and the contract has physical implications.

Markets are most efficient when everyone has equal access to information, and the same is essential for blockchains. So much so that any effort to control decentralized networks may, in fact, render the whole blockchain uninsurable. It is fundamentally important that insurers stay vigilant about the mechanics of any blockchain enterprise they seek to insure, especially when applying blockchain to their own internal processes.

Adapted from: Insurance: The Highest and Best Use of Blockchain Technology, July 2016 National Center for Insurance Policy and Research/National Association of Insurance Commissioners Newsletter: http://www.naic.org/cipr_newsletter_archive/vol19_blockchain.pdf

Healthcare’s Lessons for Workers’ Comp

The healthcare industry is going through seismic changes today as it tries to control costs while providing the best care possible to all patients. In workers’ compensation, the changes in healthcare are affecting us in ways we may not recognize. It behooves us to examine what’s occurring on the broader stage of healthcare and what we might learn from the great healthcare experiment that will help us improve workers’ compensation.

During the recent National Workers’ Compensation & Disability Conference (NWCDC) in Las Vegas, I joined Kimberly George (senior vice president and senior healthcare adviser at Sedgwick Claims Management Services) and Lisa Kelly (senior workers’ compensation manager for Boeing) on a panel that discussed this very topic: healthcare transformation and how it can help workers’ compensation achieve better outcomes and risk management.

What is happening in healthcare that can affect workers’ compensation?

  • The drive to accountable care. This term refers to providers being “accountable” for the outcomes of the healthcare they deliver – not just for providing the services. “Accountable care organizations” of providers have been created and have also given rise to other configurations such as medical homes – centralizing patients’ care through the primary care physician.
  • Integration of care. There is broad recognition that when services are integrated between facilities, specialties and technology, it is finally possible to deliver truly coordinated care and reap the benefits of improved quality, safety and efficiency. With integrated care, from the onset of a patient’s health episode, all clinical teams are able to communicate, monitor and track the patient’s progress.
  • Pay-for-value versus pay-for-service. Healthcare payers are shifting to payment models that reward higher-quality care and better outcomes, vs. the old fee-for-service model that paid for each transaction.

While there is no indication that our state-mandated workers’ compensation system is moving toward a pay-for-value model at this point, there is growing awareness of the value of integrating care with high-performing physicians and of linking services through technology and care coordination to achieve a more efficient and effective treatment plan and a faster return to work. This is the area in which we can immediately move workers’ compensation medical management forward. Indeed, that movement is already occurring.

Curing the Patient, Curing the System

Traditionally, workers’ compensation focuses on getting injured workers to the closest provider, instead of the one that delivers the best patient experience and produces the best outcomes. For years, payers have wondered, “Who are the best doctors, and how do I get my injured workers to them?”

Physician scorecards (measuring the outcomes through the life of a claim tied to the treating physician) provide the answer.

Physician scorecards identify physicians who produce superior outcomes at less cost. During a five-year period, a Harbor Health Systems program found that physicians with superior outcomes reduced medical costs by an average of 20%. Previous studies have shown that treatment by these physicians also shortens the duration of the claim and reduces indemnity costs.
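The mechanics of such a scorecard can be sketched with ordinary claims-data aggregation; the claims records below are invented, and a real scorecard (including Harbor Health’s) would adjust for injury severity and case mix, but the grouping-and-ranking step looks roughly like this.

```python
# A rough sketch of building a physician scorecard from closed claims.
# The claims records are invented, and a real scorecard would risk-adjust
# for injury severity and case mix before ranking anyone.
import pandas as pd

claims = pd.DataFrame([
    {"physician": "Dr. A", "medical_cost": 9_500,  "duration_days": 120, "litigated": 0},
    {"physician": "Dr. A", "medical_cost": 12_000, "duration_days": 150, "litigated": 0},
    {"physician": "Dr. B", "medical_cost": 48_000, "duration_days": 700, "litigated": 1},
    {"physician": "Dr. B", "medical_cost": 61_000, "duration_days": 900, "litigated": 1},
    {"physician": "Dr. C", "medical_cost": 15_500, "duration_days": 210, "litigated": 0},
])

scorecard = claims.groupby("physician").agg(
    avg_cost=("medical_cost", "mean"),
    avg_duration=("duration_days", "mean"),
    litigation_rate=("litigated", "mean"),
)
# Higher stars go to physicians with lower average cost and shorter duration,
# here crudely via rank; with only three physicians the scale tops out at 3.
scorecard["stars"] = (
    scorecard[["avg_cost", "avg_duration"]].rank(ascending=False).mean(axis=1)
    .rank(method="dense").astype(int)
)
print(scorecard.sort_values("stars", ascending=False))
```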

The discussion at NWCDC shared the latest data about the results from using these best-in-class physicians, and what we have discovered matters:

  • Recent results document that the higher-ranked physicians produce significantly lower duration of claims, lower claims costs, lower litigation rates, fewer TTD (temporary total disability) days, lower indemnity costs and lower reopening rates.
  • There is a striking difference between one-star physicians and five-star physicians within the workers’ compensation industry.
  • One-star primary care physicians (the lowest score being one, the highest being five) had an average cost of $244,246 per claim, while five-star physicians averaged $15,196 per claim. This data supports the concept that getting appropriate treatment faster and eliminating unnecessary care saves money on the claims side while getting an injured worker recovered and back to work faster.
  • With primary care physicians treating injured workers, the average duration of a claim (in days) for five-star physicians was 263; for one-star primary care physicians, the average claim duration amounted to a staggering 2,389 days.
  • The difference in indemnity costs was eye-opening, as well: With five-star primary care physicians, indemnity costs were approximately $5,433. With one-star physicians, indemnity costs skyrocketed to $75,829.

What’s Next?

The ability to use the best physicians for injured workers and to link together superior providers throughout the continuum of care, integrated by technologically enabled communications, is the new goal for workers’ compensation.

The technology now exists to accurately and effectively measure claims outcomes by physician, to get injured workers to see these physicians quickly, to link rapidly with best-in-class ancillary providers and to power the systems to keep the care plan on track for a fast, safe recovery.

Mere cost containment is no longer enough. Workers’ compensation professionals can and must work together to achieve better outcomes – for our organizations and, most importantly, for injured workers. If we focus on curing the patient, we will cure the system, as well.