
How Tech Created a New Industrial Model

With a connected device for every acre of inhabitable land, we are starting to remake design, manufacturing, sales. Really, everything.

With little fanfare, something amazing happened: Wherever you go, you are close to an unimaginable amount of computing power. Tech writers use the line “this changes everything” too much, so let’s just say that it’s hard to say what this won’t change.

It happened fast. According to Cisco Systems, in 2016 there were 16.3 billion connections to the internet around the globe. That number, a near doubling in just four years, works out to 650 connections for every square mile of Earth’s inhabitable land, or roughly one every acre, everywhere. Cisco figures the connections will grow another 60% by 2020.

Instead of touching a relatively simple computer, a connected smartphone, laptop, car or sensor in some way touches a big cloud computing system. These include Amazon Web Services, Microsoft Azure and the cloud run by my employer, Google (which I joined from the New York Times earlier this year to write about cloud computing).

Over the decade since they started coming online, these big public clouds have moved from selling storage, networking and computing at commodity prices to also offering higher-value applications. They host artificial intelligence software for companies that could never build their own and support large-scale software development and deployment tools, such as Docker and Kubernetes. From anywhere, it’s also possible to reach and maintain the software on millions of devices at once.

For consumers, the new model isn’t too visible. They see an app update or a real-time map that shows traffic congestion based on reports from other phones. They might see a change in the way a thermostat heats a house, or a new layout on an auto dashboard. The new model doesn’t upend life.

For companies, though, there is an entirely new information loop, one that gathers and analyzes data and deploys what it learns at increasing scale and sophistication.

Sometimes the information flows in one direction, from a sensor in the Internet of Things. More often, there is an interactive exchange: Connected devices at the edge of the system send information upstream, where it is merged in clouds with more data and analyzed. The results may be used for over-the-air software upgrades that substantially change the edge device. The process repeats, with businesses adjusting based on insights.
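To make the shape of that loop concrete, here is a minimal, self-contained Python sketch. The device class, the cloud aggregator and the update rule are hypothetical stand-ins rather than any vendor’s actual API; the point is only the cycle itself: telemetry flows up, analysis happens centrally and a configuration change flows back down.

    import random
    from statistics import mean

    # Hypothetical edge device: reports a reading, accepts over-the-air config updates.
    class EdgeDevice:
        def __init__(self, device_id):
            self.device_id = device_id
            self.sampling_hz = 1  # current behavior; may be changed remotely

        def report(self):
            return {"id": self.device_id, "temp_c": random.gauss(40, 5)}

        def apply_update(self, config):
            self.sampling_hz = config.get("sampling_hz", self.sampling_hz)

    # Hypothetical cloud side: merges telemetry from many devices and derives
    # a fleet-wide adjustment to push back out.
    class CloudAnalytics:
        def __init__(self):
            self.telemetry = []

        def ingest(self, record):
            self.telemetry.append(record)

        def derive_update(self):
            fleet_avg = mean(r["temp_c"] for r in self.telemetry)
            # Example policy: sample faster when the fleet runs hot.
            return {"sampling_hz": 10 if fleet_avg > 42 else 1}

    devices = [EdgeDevice(f"dev-{i}") for i in range(100)]
    cloud = CloudAnalytics()

    for d in devices:               # upstream: devices -> cloud
        cloud.ingest(d.report())

    update = cloud.derive_update()  # analysis in the cloud

    for d in devices:               # downstream: over-the-air update
        d.apply_update(update)

    print("fleet now sampling at", devices[0].sampling_hz, "Hz")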

See also: ‘Core in the Cloud’ Reaches Tipping Point  

This cloud-based loop amounts to a new industrial model, according to Andrew McAfee, a professor at M.I.T. and, with Erik Brynjolfsson, the coauthor of “Machine, Platform, Crowd,” a new book on the rise of artificial intelligence. AI is an increasingly important part of the analysis. Seeing the dynamic as simply more computers in the world, McAfee says, is making the same kind of mistake that industrialists made with the first electric motors.

“They thought an electric engine was more efficient but basically like a steam engine,” he says. “Then they put smaller engines around and created conveyor belts, overhead cranes — they rethought what a factory was about, what the new routines were. Eventually, it didn’t matter what other strengths you had, you couldn’t compete if you didn’t figure that out.”

The new model is already changing how new companies operate. Startups like Snap, Spotify or Uber create business models that assume high levels of connectivity, data ingestion and analysis — a combination of tools at hand from a single source, rather than discrete functions. They assume their product will change rapidly in look, feel and function, based on new data.

The same dynamic is happening in industrial businesses that previously didn’t need lots of software.

Take Carbon, a Redwood City, Calif., maker of industrial 3D printers. More than 100 of its cloud-connected products are in customers’ hands, making resin-based items for sneakers, helmets and cloud computing parts, among other things.

Rather than sell machines outright, Carbon offers them on a subscription basis. That way, it can observe what all of its machines are doing under different uses, derive conclusions from the whole fleet on a continuous basis and upgrade the printers with monthly software downloads. A screen in the company’s front lobby shows total resin consumption, data collected on AWS that forms the basis for Carbon’s collective learning.

“The same way Google gets information to make searches better, we get millions of data points a day from what our machines are doing,” says Joe DeSimone, Carbon’s founder and CEO. “We can see what one industry does with the machine and share that with another.”
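At its simplest, that collective learning is aggregation: per-machine data points grouped so one segment of the fleet can be compared with another. The sketch below is a hypothetical Python illustration; the field names and numbers are invented and are not Carbon’s actual data.

    from collections import defaultdict
    from statistics import mean

    # Invented per-machine data points; in practice these would stream in from
    # the connected printers by the millions each day.
    data_points = [
        {"machine": "m-001", "industry": "footwear",   "dry_minutes": 42.0},
        {"machine": "m-002", "industry": "footwear",   "dry_minutes": 39.5},
        {"machine": "m-003", "industry": "automotive", "dry_minutes": 51.2},
        {"machine": "m-004", "industry": "medical",    "dry_minutes": 47.8},
    ]

    # Group readings by the customer's industry.
    by_industry = defaultdict(list)
    for point in data_points:
        by_industry[point["industry"]].append(point["dry_minutes"])

    # A per-industry benchmark the whole fleet can learn from.
    benchmarks = {industry: mean(times) for industry, times in by_industry.items()}
    print(benchmarks)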

One recent improvement involved changing the mix of oxygen in a Carbon printer’s manufacturing chamber, which improved drying time by 20%. Building sneakers for Adidas, Carbon was able to design and manufacture 50 prototype shoes in less time than it used to take to produce half a dozen test models. It now manufactures novel designs that were previously only theoretical.

The cloud-based business dynamic raises a number of novel questions. If using a product is now also a form of programming a producer’s system, should a company’s avid data contributions be rewarded?

For Wall Street, which is the more interesting number: the revenue from sales of a product, or how much data the company is deriving from the product a month later?

Which matters more to a company, a data point about someone’s location, or its context with things like time and surroundings? Which is better: more data everywhere, or high-quality and reliable information on just a few things?

Moreover, products are now designed to create not just a type of experience but a type of data-gathering interaction. A Tesla’s door handles emerge as you approach it carrying a key. An iPhone or a Pixel phone comes out of its box fully charged. Google’s search page is a box awaiting your query. In every case, the object is eager to begin learning from you, welcoming its owner to interact so it can start gathering data and personalizing itself. “Design for interaction” may become a new specialization.

The cloud-based industrial model puts responsive, information-seeking software closer to the center of general business processes. In this regard, the tradition of creating workflows is likely to change again.

See also: Strategist’s Guide to Artificial Intelligence  

A traditional organizational chart resembled a factory, assembling tasks into higher functions. Twenty-five years ago, client-server networks enabled easier information sharing, eliminating layers of middle management and encouraging open-plan offices. As naming data domains and rapidly interacting with new insights move to the center of corporate life, new management theories will doubtless arise as well.

“Clouds already interpenetrate everything,” says Tim O’Reilly, a noted technology publisher and author. “We’ll take for granted computation all around us, and our things talking with us. There is a coming generation of the workforce that is going to learn how we apply it.”

3 Reasons Insurance Is Changed Forever

We are entering a new era for global insurers, one where business interruption claims are no longer confined to a limited geography but can simultaneously have an impact on seemingly disconnected insureds globally. This creates new forms of systemic risks that could threaten the solvency of major insurers if they do not understand the silent and affirmative cyber risks inherent in their portfolios.

On Friday, Oct. 21, 2016, a distributed denial-of-service (DDoS) attack rendered a large number of the world’s most popular websites — including Twitter, Amazon, Netflix and GitHub — inaccessible to many users. The attackers conscripted vulnerable Internet of Things (IoT) devices such as routers, DVRs and CCTV cameras to overwhelm DNS provider Dyn, effectively hampering internet users’ ability to access websites across Europe and North America. The attack was carried out using an IoT botnet called Mirai, which works by continuously scanning for IoT devices that still use factory-default user names and passwords.

The Dyn attack highlights three fundamental developments that have changed the nature of aggregated business interruption for the commercial insurance industry:

1. The proliferation of systemically important vendors

Systemically important vendors have emerged whose failure can cause simultaneous business interruption across large portions of the global economy.

The insurance industry is aware of the potential aggregation risk in cloud computing services, such as Amazon Web Services (AWS) and Microsoft Azure. Cloud computing providers create potential for aggregation risk; however, given the layers of security, redundancy and the 38 global availability zones built into AWS, it is not necessarily the easiest target for adversaries seeking to cause a catastrophic event for insurers.

See also: Who Will Make the IoT Safe?

There are potentially several hundred systemically important vendors that could be susceptible to concurrent and substantial business interruption. This includes at least eight DNS providers that service over 50,000 websites — and some of these vendors may not have the kind of security that exists within providers like AWS.

2. Insecurity in the Internet of Things (IoT) built into all aspects of the global economy

The emergence of IoT, with applications as diverse as consumer devices, manufacturing sensors, health monitoring and connected vehicles, is another key development. Estimates state that anywhere from 20 billion to 200 billion everyday objects will be connected to the internet by 2020. In the rush to get these products to market, security is often not built into their design.

Symantec’s research has shown that the state of IoT security is poor (a minimal illustration of the first issue appears after the list):

  • 19% of all tested mobile apps used to control IoT devices did not use Secure Sockets Layer (SSL) connections to the cloud.
  • 40% of tested devices allowed unauthorized access to back-end systems.
  • 50% of tested devices did not provide encrypted firmware updates — if updates were provided at all.
  • IoT devices usually had weak password hygiene, including factory-default passwords; adversaries have used default credentials on Raspberry Pi devices, for example, to compromise them.
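As a concrete illustration of the first finding, the Python sketch below checks whether a device’s cloud endpoint can even be reached over a TLS connection with certificate verification. The host name is a placeholder, and a real device assessment involves far more than this single check.

    import socket
    import ssl

    def uses_verified_tls(host, port=443, timeout=5.0):
        """Return True if a TLS handshake with certificate verification succeeds."""
        context = ssl.create_default_context()  # verifies certificates and host names
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                with context.wrap_socket(sock, server_hostname=host) as tls:
                    return tls.version() is not None
        except (ssl.SSLError, OSError):
            return False

    # Placeholder host name; substitute the endpoint a given device reports to.
    print(uses_verified_tls("device-cloud.example.com"))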

The Dyn attack compromised less than 1% of IoT devices: by some accounts, millions of vulnerable devices were used, out of a market of approximately 10 billion. XiongMai Technologies, the Chinese electronics firm behind many of the webcams compromised in the attack, has issued a recall for many of its devices.

Outages like these are just the beginning.

Shankar Somasundaram, senior director, Internet of Things at Symantec, expects more of these attacks in the near future.

3. Catastrophic losses from cyber risks are not independent, unlike natural catastrophes

A core tenet of natural catastrophe modeling is that aggregation events are largely independent. An earthquake in Japan does not increase the likelihood of an earthquake in California.

In the cyber world, which is populated by active adversaries, this does not hold true, for two reasons, both of which require an understanding of threat actors.

First, an attack on an organization like Dyn will often lead to copycat attacks from disparate non-state groups. Symantec maintains a network of honeypots that collects IoT malware samples. The observed distribution of attack sources is below:

  • 34% from China
  • 26% from the U.S.
  • 9% from Russia
  • 6% from Germany
  • 5% from the Netherlands
  • 5% from Ukraine
  • Long tail of adversaries from Vietnam, the UK, France and South Korea

Groups such as New World Hacking often replicate attacks. Understanding where such groups are focusing their time and attention, and whether they are attempting to replicate an attack, is important for an insurer responding to a one-off event.

See also: Why More Attacks Via IoT Are Inevitable  

Second, a key aspect to consider in cyber modeling is intelligence about state-based threat actors. It is important to understand both the capabilities and the motivations of threat actors when assessing the frequency of catastrophic scenarios. Scenarios where we see a greater propensity for catastrophic cyber attacks are also scenarios where those state actors are likely attempting multiple attacks. Although insurers may wish to seek refuge in the act-of-war definitions that exist in other insurance lines, attributing a cyber attack to state-based actors is difficult and, in some cases, not possible.
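The difference the independence assumption makes can be seen in a toy Monte Carlo comparison, sketched below in Python. The numbers are invented purely for illustration, and this is nothing like a real cyber catastrophe model: both portfolios have the same expected annual loss, but in one of them a single shared driver, such as a common vendor outage, can hit every insured in the same year.

    import random

    N_INSUREDS = 1_000
    P_LOSS = 0.02   # annual probability of the loss event
    TRIALS = 20_000

    def simulate(correlated):
        totals = []
        for _ in range(TRIALS):
            if correlated:
                # One shared driver: with the same 2% probability,
                # every insured is hit in the same year.
                hits = N_INSUREDS if random.random() < P_LOSS else 0
            else:
                # Independent losses, as in the earthquake analogy.
                hits = sum(random.random() < P_LOSS for _ in range(N_INSUREDS))
            totals.append(hits)
        return sorted(totals)

    for label, correlated in (("independent", False), ("correlated", True)):
        totals = simulate(correlated)
        mean_loss = sum(totals) / TRIALS
        p99 = totals[int(0.99 * TRIALS)]  # 99th-percentile annual loss
        print(f"{label:12s} mean={mean_loss:7.1f}  99th percentile={p99:7.1f}")

In the independent case, the 99th-percentile year is only modestly worse than the average year; in the correlated case, it is dramatically worse. That tail is the exposure the Dyn attack hints at.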

What does this mean for global insurers?

The Dyn attack illustrates that insurers need to pursue new approaches to understanding and modeling cyber risk. Recommendations for insurers are below:

  1. Recognize that cyber as a peril extends far beyond data breach and liability coverage and could be embedded in almost all major commercial insurance lines.
  2. Develop and hire cyber security expertise internally — especially in the group risk function — to understand the implications of cyber perils across all lines.
  3. When underwriting companies that use IoT devices, understand whether basic IoT security hygiene is being practiced.
  4. Partner with institutions that can provide a multi-disciplinary approach to modeling cyber security for insurers, including:
  • Hard data (for example, attack trends across the kill chain by industry);
  • Intelligence (such as active adversary monitoring); and
  • Expertise (in new IoT technologies and key points of failure).

Symantec is partnering globally with leading insurers to develop probabilistic, scenario-based modeling to help understand cyber risks inherent in standalone cyber policies, as well as cyber as a peril across all lines of insurance. The Internet of Things opens up tremendous new opportunities for consumers and businesses, but understanding the financial risks inherent in this development will require deep collaboration between the cyber security and cyber insurance industries.