'Law of Computability' Powers the Bionic Era

The bionic era automates symbolic work – perceiving and judging – and blends powerfully with the industrial era's automation of physical work.

The news is filled with stories about advanced applications of technology, from the Internet of Things, satellite data, sensors and drones, to augmented reality, artificial intelligence, neural networks and machine learning. Each of these applications comes with its own body of research, corporate promoters and analyst predictions, making them seem unrelated. They aren’t. We are in a new era, which I call the Bionic Era, and it’s as powerful and pervasive as the industrial era before it. In the bionic era, we have a new blend of people and machines doing and thinking together. Where the industrial era automated physical work, the bionic era automates symbolic work – thinking, perceiving and judging – blending with the advances of the industrial era in a dynamic and powerful way.

This bionic era changes the nature of economic returns and the very forms of capital that underpin our economy.

Modern life is so replete with the dance between people and our machines (cars, dishwashers, phones, etc.) that to observe their dynamic co-habitation is a bold assertion of the obvious. Yet, in the bionic era, which parts of life – especially which types of thinking and decision making – are susceptible to computability, and when, is a tricky and important question to answer. Answering it can give insight into questions like: Is my job safe? Will my company be put out of business? Which military powers will have the upper hand in the future?

The concept of computability can help us navigate.

What is computability?

A task is computable if it is highly digitized and there is a high level of knowledge about it, a relationship I refer to as the Law of Computability:

Law of Computability (LoC): Computability = Degree of Digitization of the Phenomenon * Level of Knowledge of the Phenomenon

I use the term "computability" instead of "automation" because with physical automation there is no digital description of the task. A washing machine automates the most labor-intensive part of doing laundry, but making the task computable would require a digital twin of the washing machine to drive the operation and record details about all the elements that affect it, such as water, temperature, soap, agitation speed, the size of the motor, etc.
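To make the multiplicative relationship concrete, here is a toy sketch in Python. The 0-to-1 scales and the example scores are my own illustrative assumptions, not figures from the article:

```python
# Toy illustration of the Law of Computability (LoC).
# The 0-1 scales and the example scores below are assumptions
# chosen for illustration only.

def computability(digitization: float, knowledge: float) -> float:
    """Computability = degree of digitization * level of knowledge."""
    return digitization * knowledge

# Hypothetical scores (digitization, knowledge) for tasks from the article.
tasks = {
    "washing-machine predictive maintenance": (0.9, 0.8),
    "driving (pre-LiDAR prototype)":          (0.9, 0.4),
    "Alzheimer's patient care":               (0.2, 0.3),
}

for task, (d, k) in tasks.items():
    print(f"{task}: {computability(d, k):.2f}")
```

The multiplication captures the key property of the law: a task that scores near zero on either dimension remains essentially uncomputable, however strong the other dimension is.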

For a specific example of how the Law of Computability applies in the present day, consider the rapid advance of the self-driving car. Engineers believed they had a high degree of knowledge about the task of driving when Google set out to design a commercially viable self-driving vehicle, according to Chris Urmson, the first lead engineer for the Google car project. Engineers had a deep understanding of the physics of car movement, location, function and behavior.

Early self-driving prototypes were also highly digitized, with multiple inputs including GPS, digital maps, onboard sensor equipment and massive amounts of computer processing power. Total cost for all that hardware: $250,000. Yet that combination of digitization and knowledge still resulted in a three-foot margin of error – far too large for safety. Eventually, Urmson’s team realized that, while they understood how the car works and had extensive digital information about it, the environment through which the car drives is not equally digitized. Nor did they know as much as they needed to about that environment, because environments are always changing (one minute the crosswalk has a kid in it, the next it doesn’t).

Google engineers filled that gap by adding LiDAR laser surveying technology to the top of the car. The LiDAR spins around and collects over 1.5 million data points per second. By adding this tool to digitize the description of the environment in real time, Google was able to compute a dynamic, three-dimensional model through which the car drives. The addition of the LiDAR enabled engineers to deepen their knowledge of the driving environment to model pedestrians, cyclists, police officers, dogs, children and the millions and billions of situations and objects – living and otherwise — within it.

As driving becomes more computable, the change not only affects the sources of profit and power in the auto industry and related industries; it shifts the traditional definitions and buying behaviors of the entire sector and parallel sectors it touches, including insurance, repair shops, road construction firms, toll systems, parking, municipalities earning revenue from traffic tickets, public transportation, taxi and ride-sharing services and others. The folks at GM talk about having a market in transportation demand dynamically matched to transportation supply – including everything from owned automobiles, to shared bikes, to self-driving Uber vehicles.

How much do we have to “know” to render a task computable?

Different industries operate in wildly different contexts when it comes to how close we are to computing the underlying tasks that drive them. Think predictive maintenance on a washing machine (high knowledge) vs. why cancer forms in any individual human body (low knowledge). The first task is largely computable; the second is not even close.

My friend Roger Bohn defined seven stages in his iconic knowledge framework. For the context of computability, we care about only three: Description, Correlation and Causation. The ability to describe a task is the baseline requirement to begin rendering it computable. When knowledge has deepened to the point of understanding correlation, we know enough about a task to understand the likely elements involved or affected. When knowledge has progressed to causation, we understand fully how it works.


In the context of the self-driving car, the LiDAR completed the necessary description. That allowed companies like Google, GM and Ford (not to leave out Tesla, which is following a parallel path with a different technological approach) to build fully working models and put them first on test tracks and then on real roads, running them through driving scenarios and gathering data to analyze and deepen what we know. As Ray Kurzweil has pointed out, technology and knowledge form a positive feedback loop, so once a new technology is operating, learning and change happen faster. We see that with autonomous vehicles – the entire fleet learns together. The self-driving car is now on the fast track between description and correlation.

Computability Changes the Relationships Between Humans and Machines

Remembering that the bionic era is about a new mix of humans and machines, let’s explore how computability changes that blend. The vast digitization of the world is helping to simultaneously apply known techniques in new ways and find new knowledge to progress our understanding. For example, my dear friend John Henderson once tagged every single asset in the South Shore hospital – doctors, patients, ultrasound machines, etc. This created a digital library of all assets and their status. (In the language of the law of computability, it increased the degree of digitization of the phenomenon.) With this new level of digitization in hand, his team was then able to apply existing operations research knowledge on scheduling, queuing theory, etc., to optimize the use of those assets – increasing throughput of the operating suites by 25%, a huge gain in operational efficiency. In the language of the LoC, increased digitization unlocked the power of existing knowledge of the phenomenon of interest.
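As an illustration of the kind of operations-research knowledge such digitization unlocks, here is a minimal M/M/1 queueing calculation in Python. The arrival and service rates are invented for the example, and the hospital's actual scheduling models were surely far richer:

```python
# Minimal M/M/1 queueing sketch: the kind of textbook operations-research
# calculation that becomes usable once asset status (e.g., operating-suite
# utilization) is digitized. Rates below are invented for illustration.

def mm1_metrics(arrival_rate: float, service_rate: float):
    """Return (utilization, mean wait in queue) for an M/M/1 queue."""
    if arrival_rate >= service_rate:
        raise ValueError("Queue is unstable: arrivals outpace service.")
    rho = arrival_rate / service_rate          # fraction of time the suite is busy
    wq = rho / (service_rate - arrival_rate)   # mean time a case waits in queue
    return rho, wq

# Suppose digitized status data shows 3 cases/hour arriving at a suite
# that can complete 4 cases/hour.
rho, wq = mm1_metrics(3.0, 4.0)
print(f"utilization={rho:.0%}, mean wait={wq:.2f} hours")
```

With numbers like these in hand, a team can test how re-sequencing cases or adding capacity changes utilization and waiting time before touching the physical schedule.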

Some pundits have said that if you can write down the function of your job, step by step, then it can be automated. I think that’s only partially right. For example, you can write down the task of caring for an Alzheimer’s patient as a step-by-step process, but we cannot yet render it computable, for many reasons. Robots cannot handle the complex and dynamic environment of person-to-person care. Emotion, empathy and human understanding have a massive impact on patients’ wellness, and we don’t yet know how and when humans might “feel” the same way about machines. So, that task is far from computable – even though the programmatic articulation of steps can be done.

Using the Law of Computability: It’s all linking up or discovery

The twin challenges of any business are to understand how to digitize enough to use existing knowledge to create value, and how to create new, practical knowledge faster. For example, in a recent hurricane in Puerto Rico, a PwC team attached low-power WiFi sensors on tanks of diesel fuel running the generators powering the pumps that drove the water supply. This simple digitization enabled a whole new level of performance and confidence in the emergency water system. In a different example, Climate Corp., which sells crop insurance, used a combination of satellite, sensor and publicly available weather data to create a more accurate growth model for corn and other crops. They are so confident in their level of knowledge of the phenomenon of interest that they pay claims based on their model, without ever visiting the affected field. Again, the computability of the tasks changes the very nature and economics of the firm’s operations.

These questions apply not only to manufacturing sectors or service industries, like retail insurance, already far along their computability evolution, but also to knowledge-intensive industries that run on specialized human skills that many believe – sometimes falsely – are not imminently computable. Consider how this applies in digital retail. Before the digital age, customized shopping recommendations were so labor-intensive that they were only provided by luxury brands offering dedicated, concierge-like services. Today, the data trail of browsing histories and digitally captured transaction details allows almost any digitally enabled retailer to develop a profile of a customer’s buying behavior, payment methods, shipping locations, etc. Amazon is the undisputed leader in this space in the West because the platform it has built for selling everything from books to vitamin supplements has allowed it to be both high scale and high scope – in other words, Amazon knows a lot about you and a lot about an enormous range of products. It took many years and many billions of dollars to capture customers, build its business and technological platforms and develop robust analytics capabilities – in short, Amazon has higher fixed costs for making recommendations than, say, Walmart. But the marginal cost of making any given shopping recommendation is now at or near zero. And the sheer volume of customers and customer opinions on a near-infinite range of products drives still more traffic to its properties, as customers use Amazon not only to shop but to define their consideration set.
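The fixed-versus-marginal cost point can be sketched numerically; the dollar figures below are invented for illustration, not Amazon's actual costs:

```python
# Sketch of high-fixed-cost / near-zero-marginal-cost economics.
# Both figures are invented for illustration.
FIXED_COST = 5e9        # assumed platform investment, in dollars
MARGINAL_COST = 0.0001  # assumed cost per additional recommendation, in dollars

def avg_cost_per_recommendation(n: int) -> float:
    """Average cost per recommendation at volume n."""
    return FIXED_COST / n + MARGINAL_COST

for n in (10**6, 10**9, 10**12):
    print(f"{n:>15,} recommendations -> ${avg_cost_per_recommendation(n):,.6f} each")
```

As volume grows, the average cost falls toward the marginal cost, which is why the platform with the largest customer base can out-recommend, and out-earn, every smaller rival.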

As this example also makes clear, the likely path for many knowledge-intensive industries is that the companies that create a dominant platform will thrive, and others will either barely hang on, or go out of business. The dominant platform(s) will have a more capital-intensive base, but excellent marginal and total economics, which will give them the capital necessary to continue to improve their technology to expand the distance between them and their next-closest competitor. They will also be able to skim the market on talent because technical expertise gravitates toward the leader.

How soon will computability change industry dynamics?

The law of computability helps answer what will be computable. The question of when, however, hinges not just on computability but on the competitive dynamics of a given industry and its sources of economic value. Technological progress is a dance between the possibilities of science and engineering, and the ambition of individual actors within businesses and government. Without the shock that came on Oct. 4, 1957, when the Soviet Union put Sputnik, the first human-made object, into orbit around the earth, John F. Kennedy would never have committed America to the moon project. There’s a Sputnik moment on computability coming in every company’s future. If one of the lead companies in an industry dives in and creates a solution, others will follow. The critical question for executives is, can you afford to be second?


Technologies are often overhyped early and underappreciated later. When the iPhone was introduced on June 29, 2007, few people would have predicted the complete reconfiguration of where consumers spend their time, how people communicate and where people shop. In only 10 years, the entire consumer experience for billions of people radically shifted.

Some firms want to lead this revolution so they can be on the right end of the economic power curve that computability can deliver. Some, like Goldman Sachs, are already aiming to compute at least 10% of what their well-paid staff does today. GE is building digital twins of its industrial machines because it wants to drive productivity and gain market power. Others will need to respond if these business-to-business leaders have the same market power that the consumer companies like Facebook, Amazon, Alibaba and others have had in the consumer market. Those firms that can combine knowledge of their tasks and industries with deep digitization can lead the way in computability – and thereby garner competitive advantage that will be hard to overcome.

John Sviokla


Dr. John Sviokla has almost 30 years of experience researching, writing and speaking about digital transformation — making it a reality in companies large and small. He has over 100 publications in many journals, including Sloan Management Review, WSJ and the Financial Times.

