Tag Archives: Amazon Web Services

How Tech Created a New Industrial Model

With a connected device for every acre of inhabitable land, we are starting to remake design, manufacturing, sales. Really, everything.

With little fanfare, something amazing happened: Wherever you go, you are close to an unimaginable amount of computing power. Tech writers use the line “this changes everything” too much, so let’s just say that it’s hard to say what this won’t change.

It happened fast. According to Cisco Systems, in 2016 there were 16.3 billion connections to the internet around the globe. That number, a near doubling in just four years, works out to 650 connections for every square mile of Earth’s inhabitable land, or roughly one every acre, everywhere. Cisco figures the connections will grow another 60% by 2020.

A connected smartphone, laptop, car or sensor is no longer just a relatively simple computer; in some way, it touches a big cloud computing system. These include Amazon Web Services, Microsoft Azure and my employer, Google (which I joined from the New York Times earlier this year to write about cloud computing).

Over the decade since they started coming online, these big public clouds have moved from selling storage, networking and computing at commodity prices to also offering higher-value applications. They host artificial intelligence software for companies that could never build their own and enable large-scale software development and management systems, such as Docker and Kubernetes. From anywhere, it’s also possible to reach and maintain the software on millions of devices at once.

For consumers, the new model isn’t too visible. They see an app update or a real-time map that shows traffic congestion based on reports from other phones. They might see a change in the way a thermostat heats a house, or a new layout on an auto dashboard. The new model doesn’t upend life.

For companies, though, there is an entirely new information loop, gathering and analyzing data and deploying its learning at increasing scale and sophistication.

Sometimes the information flows in one direction, from a sensor in the Internet of Things. More often, there is an interactive exchange: Connected devices at the edge of the system send information upstream, where it is merged in clouds with more data and analyzed. The results may be used for over-the-air software upgrades that substantially change the edge device. The process repeats, with businesses adjusting based on insights.
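
In code, that loop can be reduced to something like the minimal Python sketch below; the endpoints, payload fields and update check are illustrative assumptions, not any vendor’s actual API.

```python
# Illustrative sketch of the edge-to-cloud loop: a device reports telemetry
# upstream and asks whether an over-the-air update is available. The endpoints
# and field names are hypothetical.
import json
import time
import urllib.request

INGEST_URL = "https://ingest.example.com/telemetry"   # hypothetical cloud endpoint
UPDATE_URL = "https://updates.example.com/check"      # hypothetical update service

def send_telemetry(device_id: str, reading: float) -> None:
    """Send one sensor reading upstream for cloud-side merging and analysis."""
    payload = json.dumps({"device": device_id,
                          "value": reading,
                          "ts": time.time()}).encode()
    req = urllib.request.Request(INGEST_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=10)

def check_for_update(device_id: str, version: str):
    """Ask the cloud whether a newer build exists; None means the device is current."""
    url = f"{UPDATE_URL}?device={device_id}&version={version}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp).get("new_version")
```

In practice, the cloud side merges those readings with other data, runs the analysis and decides when to push an over-the-air update back down to the device.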

See also: ‘Core in the Cloud’ Reaches Tipping Point  

This cloud-based loop amounts to a new industrial model, according to Andrew McAfee, a professor at M.I.T. and, with Erik Brynjolfsson, the coauthor of “Machine, Platform, Crowd,” a new book on the rise of artificial intelligence. AI is an increasingly important part of the analysis. Seeing the dynamic as simply more computers in the world, McAfee says, is making the same kind of mistake that industrialists made with the first electric motors.

“They thought an electric engine was more efficient but basically like a steam engine,” he says. “Then they put smaller engines around and created conveyor belts, overhead cranes — they rethought what a factory was about, what the new routines were. Eventually, it didn’t matter what other strengths you had, you couldn’t compete if you didn’t figure that out.”

The new model is already changing how new companies operate. Startups like Snap, Spotify or Uber create business models that assume high levels of connectivity, data ingestion and analysis — a combination of tools at hand from a single source, rather than discrete functions. They assume their product will change rapidly in look, feel and function, based on new data.

The same dynamic is happening in industrial businesses that previously didn’t need lots of software.

Take Carbon, a Redwood City, Calif., maker of industrial 3D printers. More than 100 of its cloud-connected products are with customers, making resin-based items for sneakers, helmets and cloud computing parts, among other things.

Rather than sell machines outright, Carbon offers them by subscription. That way, it can observe what all of its machines are doing under different uses, derive conclusions from all of them on a continuous basis and upgrade the printers with monthly software downloads. A screen in the company’s front lobby shows the total consumption of resins, data collected on AWS that forms the basis for Carbon’s collective learning.
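
As a toy illustration of that kind of fleet-wide aggregation, the Python sketch below sums per-printer resin usage into the sort of total a lobby dashboard might display; the field names and values are assumptions, not Carbon’s actual telemetry schema.

```python
# Toy fleet-wide aggregation: sum resin usage reported by many printers into
# the kind of total a dashboard might display. Field names are assumed.
from collections import defaultdict

def fleet_totals(telemetry):
    """Sum liters of resin used, grouped by resin type, across all printers."""
    totals = defaultdict(float)
    for record in telemetry:
        totals[record["resin_type"]] += record["liters_used"]
    return dict(totals)

sample = [
    {"printer_id": "m-001", "resin_type": "elastomer", "liters_used": 1.5},
    {"printer_id": "m-002", "resin_type": "elastomer", "liters_used": 0.75},
    {"printer_id": "m-003", "resin_type": "rigid",     "liters_used": 2.5},
]
print(fleet_totals(sample))  # {'elastomer': 2.25, 'rigid': 2.5}
```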

“The same way Google gets information to make searches better, we get millions of data points a day from what our machines are doing,” says Joe DeSimone, Carbon’s founder and CEO. “We can see what one industry does with the machine and share that with another.”

One recent improvement involved changing the mix of oxygen in a Carbon printer’s manufacturing chamber, which improved drying time by 20%. Building sneakers for Adidas, Carbon designed and manufactured 50 prototype shoes in less time than it previously took to produce half a dozen test models. It now manufactures novel designs that were previously only theoretical.

The cloud-based business dynamic raises a number of novel questions. If using a product is now also a form of programming the producer’s system, should a customer’s avid data contributions be rewarded?

For Wall Street, which is the more interesting number: the revenue from sales of a product, or how much data the company is deriving from the product a month later?

Which matters more to a company, a data point about someone’s location, or its context with things like time and surroundings? Which is better: more data everywhere, or high-quality and reliable information on just a few things?

Moreover, products are now designed to create not just a type of experience but a type of data-gathering interaction. A Tesla’s door handles emerge as you approach the car carrying a key. An iPhone or a Pixel phone comes out of its box fully charged. Google’s search page is a box awaiting your query. In every case, the object invites its owner to interact with it immediately, so it can begin to gather data and personalize itself. “Design for interaction” may become a new specialization.

The cloud-based industrial model puts information-seeking, responsive software closer to the center of general business processes. In this regard, the traditional way of building workflows is likely to change again.

See also: Strategist’s Guide to Artificial Intelligence  

A traditional organizational chart resembled a factory, assembling tasks into higher functions. Twenty-five years ago, client-server networks enabled easier information sharing, eliminating layers of middle management and encouraging open-plan offices. As naming data domains and rapidly interacting with new insights move to the center of corporate life, new management theories will doubtless arise as well.

“Clouds already interpenetrate everything,” says Tim O’Reilly, a noted technology publisher and author. “We’ll take for granted computation all around us, and our things talking with us. There is a coming generation of the workforce that is going to learn how we apply it.”

3 Reasons Insurance Is Changed Forever

We are entering a new era for global insurers, one where business interruption claims are no longer confined to a limited geography but can simultaneously have an impact on seemingly disconnected insureds globally. This creates new forms of systemic risks that could threaten the solvency of major insurers if they do not understand the silent and affirmative cyber risks inherent in their portfolios.

On Friday, Oct. 21, a distributed denial of service attack (DDoS) rendered a large number of the world’s most popular websites — including Twitter, Amazon, Netflix and GitHub — inaccessible to many users. The internet outage conscripted vulnerable Internet of Things (IoT) devices such as routers, DVRs and CCTV cameras to overwhelm DNS provider Dyn, effectively hampering internet users’ ability to access websites across Europe and North America. The attack was carried out using an IoT botnet called Mirai, which works by continuously scanning for IoT devices with factory default user names and passwords.

The Dyn attack highlights three fundamental developments that have changed the nature of aggregated business interruption for the commercial insurance industry:

1. The proliferation of systemically important vendors

The emergence of systemically important vendors can cause simultaneous business interruption to large portions of the global economy.

The insurance industry is aware of the potential aggregation risk in cloud computing services such as Amazon Web Services (AWS) and Microsoft Azure. Cloud computing providers create potential for aggregation risk; however, given the layers of security, redundancy and the 38 global availability zones built into AWS, it is not necessarily the easiest target for adversaries looking to cause a catastrophic event for insurers.

See also: Who Will Make the IoT Safe?

There are potentially several hundred systemically important vendors that could be susceptible to concurrent and substantial business interruption. This includes at least eight DNS providers that service over 50,000 websites — and some of these vendors may not have the kind of security that exists within providers like AWS.
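
One rough way to see that concentration is simply to tally which DNS providers the domains in a portfolio of insureds depend on. The Python sketch below does this with the third-party dnspython package and a placeholder domain list; it is a back-of-the-envelope screen, not a full dependency analysis.

```python
# Rough sketch of measuring DNS-provider concentration across a set of domains.
# Assumes the third-party "dnspython" package (pip install dnspython); the
# domain list is a placeholder for an insurer's actual book of insureds.
from collections import Counter
import dns.resolver

domains = ["example.com", "example.org", "example.net"]  # placeholder portfolio

provider_counts = Counter()
for domain in domains:
    try:
        answers = dns.resolver.resolve(domain, "NS")
    except Exception:
        continue  # skip domains that fail to resolve
    # Group nameservers by their parent zone, a crude proxy for the DNS vendor.
    providers = {".".join(str(r.target).rstrip(".").split(".")[-2:]) for r in answers}
    provider_counts.update(providers)

for provider, count in provider_counts.most_common():
    print(f"{provider}: {count} of {len(domains)} domains")
```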

2. Insecurity in the Internet of Things (IoT) built into all aspects of the global economy

The emergence of the IoT, with applications as diverse as consumer devices, manufacturing sensors, health monitoring and connected vehicles, is another key development. Estimates of how many everyday objects will be connected to the internet by 2020 range from 20 billion to 200 billion. In the rush to get these products to market, security is often not built into their design.

Symantec’s research has shown that the state of IoT security is poor (a basic check for the first point is sketched after this list):

  • 19% of all tested mobile apps used to control IoT devices did not use Secure Sockets Layer (SSL) connections to the cloud.
  • 40% of tested devices allowed unauthorized access to back-end systems.
  • 50% of tested devices did not provide encrypted firmware updates — if updates were provided at all.
  • IoT devices commonly had weak password hygiene, including factory default passwords; adversaries use default credentials, such as those for Raspberry Pi devices, to compromise them.
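
A basic check for the first point, whether a device’s cloud back end presents a certificate that validates against the system trust store, can be as simple as the Python sketch below; the hostname is a placeholder.

```python
# Minimal sketch: does an IoT back-end endpoint complete a TLS handshake with a
# certificate that validates against the system trust store? The hostname below
# is a placeholder, not a real device service.
import socket
import ssl

def tls_ok(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    context = ssl.create_default_context()  # enables certificate + hostname checks
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False

print(tls_ok("device-cloud.example.com"))  # placeholder endpoint
```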

The Dyn attack compromised less than 1% of IoT devices: by some accounts, millions of vulnerable devices were used, out of a market of approximately 10 billion. XiongMai Technologies, the Chinese electronics firm behind many of the webcams compromised in the attack, has issued a recall for many of its devices.

Outages like these are just the beginning.

Shankar Somasundaram, senior director, Internet of Things at Symantec, expects more of these attacks in the near future.

3. Catastrophic cyber losses, unlike natural catastrophes, are not independent

A core tenet of natural catastrophe modeling is that aggregation events are largely independent. An earthquake in Japan does not increase the likelihood of an earthquake in California.

In a cyber world of active adversaries, this does not hold true, for two reasons, both of which require an understanding of threat actors.

First, an attack on an organization like Dyn will often lead to copycat attacks from disparate non-state groups. Symantec maintains a network of honeypots that collects IoT malware samples. The distribution of attack origins is below:

  • 34% from China
  • 26% from the U.S.
  • 9% from Russia
  • 6% from Germany
  • 5% from the Netherlands
  • 5% from Ukraine
  • A long tail of attacks from Vietnam, the U.K., France and South Korea

Groups such as New World Hacking often replicate attacks. Understanding where such groups are focusing their time and attention, and whether there are attempts to replicate an attack, is important for an insurer responding to a seemingly one-off event.
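
A small, illustrative calculation shows why this matters for aggregation; the probabilities below are assumptions chosen only to make the contrast with independent events visible.

```python
# Illustrative probabilities only: how a copycat wave breaks the independence
# assumption that underpins natural catastrophe aggregation.
p_a = 0.01           # annual chance of a severe cyber loss at insured A
p_b = 0.01           # annual chance of a severe cyber loss at insured B

# Nat-cat style assumption: the two events are independent.
p_both_independent = p_a * p_b          # 0.0001, i.e. 1 in 10,000 years

# Cyber reality: once A is hit, copycat attacks raise the chance B is hit too.
p_b_given_a = 0.25                      # assumed conditional probability
p_both_copycat = p_a * p_b_given_a      # 0.0025, i.e. 1 in 400 years

print(f"independent: {p_both_independent}, with copycats: {p_both_copycat}")
print(f"the correlated case is {p_both_copycat / p_both_independent:.0f}x more likely")
```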

See also: Why More Attacks Via IoT Are Inevitable  

Second, intelligence about state-based threat actors is a key aspect to consider in cyber modeling. It is important to understand both the capabilities and the motivations of threat actors when assessing the frequency of catastrophic scenarios. Scenarios in which we see a greater propensity for catastrophic cyber attacks are also scenarios in which those state actors are likely attempting multiple attacks. Although insurers may wish to seek refuge in the act-of-war definitions that exist in other insurance lines, attributing a cyber attack to state-based actors is difficult — and, in some cases, not possible.

What does this mean for global insurers?

The Dyn attack illustrates that insurers need to pursue new approaches to understanding and modeling cyber risk. Recommendations for insurers are below:

  1. Recognize that cyber as a peril expands far beyond data breach liability and could be embedded in almost all major commercial insurance lines.
  2. Develop and hire cyber security expertise internally — especially in the group risk function — to understand the implications of cyber perils across all lines.
  3. Understand, when underwriting companies that use IoT devices, whether basic IoT security hygiene is being practiced.
  4. Partner with institutions that can provide a multi-disciplinary approach to modeling cyber security for insurers, including:
  • Hard data (for example, attack trends across the kill chain by industry);
  • Intelligence (such as active adversary monitoring); and
  • Expertise (in new IoT technologies and key points of failure).

Symantec is partnering globally with leading insurers to develop probabilistic, scenario-based modeling to help understand cyber risks inherent in standalone cyber policies, as well as cyber as a peril across all lines of insurance. The Internet of Things opens up tremendous new opportunities for consumers and businesses, but understanding the financial risks inherent in this development will require deep collaboration between the cyber security and cyber insurance industries.
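
To give a sense of what probabilistic, scenario-based modeling can look like in its simplest form, the Python sketch below runs a Monte Carlo simulation with a common-shock term for a shared-vendor outage; every parameter is an illustrative assumption, not Symantec’s or any insurer’s calibration.

```python
# Minimal Monte Carlo sketch: annual portfolio losses from routine, independent
# incidents plus a rare systemic event (e.g., a shared-vendor outage) that hits
# many insureds at once. All parameters are illustrative assumptions.
import random

def simulate_year(n_insureds=1_000,
                  p_incident=0.02,      # chance of a routine, independent claim
                  p_systemic=0.02,      # chance of a Dyn-style shared-vendor event
                  share_exposed=0.30,   # fraction of insureds tied to that vendor
                  loss_per_claim=1.0):  # loss per claim, arbitrary units
    losses = sum(loss_per_claim for _ in range(n_insureds)
                 if random.random() < p_incident)
    if random.random() < p_systemic:
        losses += loss_per_claim * n_insureds * share_exposed
    return losses

random.seed(0)
years = sorted(simulate_year() for _ in range(10_000))
print("mean annual loss:", sum(years) / len(years))
print("99th-percentile annual loss:", years[int(0.99 * len(years)) - 1])
```

Under these assumptions, the simulated tail is driven almost entirely by the systemic term, which is exactly the aggregation problem described above.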

Amazon Prepares for Zombie Apocalypse

Amazon is revered for being a very forward-looking business. So, with the inclusion of a “zombie apocalypse” clause in its latest terms of service, should we all be worried?

It’s right there in paragraph 57.10 of the terms of service for the company’s Lumberyard game development engine, a 3D game design program for use with Amazon Web Services (AWS):

[Screenshot: the “zombie apocalypse” clause in the AWS service terms]

In this paragraph, Amazon notes the program is not intended for use with “life-critical or safety-critical systems,” except if the Centers for Disease Control and Prevention (CDC) declares the presence of a “widespread viral infection transmitted by bites or contact with bodily fluids that cause human corpses to reanimate and seek to consume living human flesh, blood, brain or nerve tissue and is likely to result in the fall of organized civilization.”

Translation: If the zombie apocalypse comes to pass, you can use Lumberyard for whatever the heck you want.

All right, so Amazon is injecting some humor into what are typically long, boring and dense terms of service that, let’s face it, no one ever reads—for any company.

But the fact that the flesh-eating undead can be referenced in a document like this with almost nobody noticing speaks to a larger and more serious issue: Disclosure documents like Amazon’s are an awful way to communicate important information to your customer.

Companies bury important details in opaque disclosures they count on no one reading. Examples abound—coverage exclusions for your insurance, service fees for your bank account, cancellation fees for your gym membership, price increases for your cable TV package or conflicts of interest for your financial adviser. Organizations hide behind these disclosure documents and point to them as evidence that anything important is, indeed, revealed to the customer.

But here’s the key these companies are missing: Disclosure is not a proxy for transparency. Indeed, as practiced these days (with pages of unintelligible fine print), disclosure is the antithesis of transparency. So let’s start referring to disclosure documents for what they really are: a tool businesses use to convey information they don’t want anyone to see. Until more companies reject such disingenuous practices (like Southwest Airlines has done with its Transfarency strategy), consumer trust in businesses will continue to erode.

Do you want to strengthen your customer relationships? Go beyond the legally required disclosures and start communicating with people in a clear and forthright way. That sends a signal to your customers that you’re advocating for them and helping them avoid unpleasant surprises, whether that’s in the form of excessive fees, conflicts of interest or, even, the zombie-induced fall of organized civilization.

That’s the kind of advocacy from which loyal brand advocates are born.

This article first appeared at Watermark Consulting.

Ceded Reinsurance Needs SaaS Model

Ceded reinsurance management is still a technology backwater at insurers that manage their reinsurance policies and claims with spreadsheet software. These manual methods are error-prone, slow and labor-intensive. Regulatory compliance is difficult, and legitimate claims can slip through the cracks.

Insurers recognize they need a better solution, and there’s progress. While quite a few insurers have implemented a ceded reinsurance system in the last few years, many more are planning to install their first system sometime soon. They want software that lets them manage complex facultative reinsurance and treaties, and the corresponding policies and claims, efficiently in one place.

Insurers looking to upgrade any kind of system have better choices today than ever about how and where to implement it. While licensing the software and running it on-premises is still an option, virtually every insurer is considering putting new systems in the cloud in some way, and nearly every insurer now expects vendors to offer a SaaS or hosted option in their RFP responses.

That’s not surprising. A 2014 Ovum whitepaper said 52% of the insurers it surveyed were earmarking 20% to 39% of new IT spending for SaaS, while 21% were spending 40% to 59%.

The definition of SaaS is not set in stone, but let’s try for a basic understanding. SaaS normally means that the insurer pays the vendor a monthly fee that covers everything—the use of the software, maintenance, upgrades and support. The software is most often hosted at a secure cloud provider, such as Amazon Web Services, that offers a sound service-level agreement.

A ceded reinsurance system is an especially good candidate for a SaaS or a hybrid solution (more on that later). It’s an opportunity for insurers and IT professionals to get comfortable with SaaS on a smaller scale before putting a core system such as a policy administration system in the cloud.

SaaS is attractive for several key reasons. One is that it can save money. Instead of paying a large upfront fee for a perpetual software license, the insurer just pays a monthly, all-inclusive “rental” fee. The software vendor and the cloud-hosting vendor provide the application, the underlying software (such as Oracle or WebSphere) and the servers. All the insurer needs is a solid internet connection. And if your building is hit with a flood or earthquake and has to close, business won’t stop, because users can access the system from almost anywhere.
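
A back-of-the-envelope comparison makes the trade-off concrete; every figure in the Python sketch below is a hypothetical assumption, not a quote from any vendor.

```python
# Hypothetical figures only: cumulative cost of a perpetual license (plus annual
# maintenance and internal hosting) versus an all-inclusive SaaS subscription.
upfront_license = 500_000      # one-time perpetual license fee
annual_run_cost = 100_000      # maintenance, upgrades, internal IT and hardware
saas_monthly_fee = 15_000      # all-inclusive subscription

for year in range(1, 6):
    on_premises = upfront_license + annual_run_cost * year
    saas = saas_monthly_fee * 12 * year
    print(f"year {year}: on-premises {on_premises:,} vs. SaaS {saas:,}")
```

Where the two lines cross depends entirely on the numbers a given insurer negotiates, which is why it is worth running the comparison rather than assuming the answer.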

Having the experts run, maintain and upgrade the software is another big advantage. Instead of having internal IT people apply patches and updates, the vendor—which knows its own software better than anyone else—keeps the application going 24/7. Because it is doing the same thing for many customers, there are economies of scale.

Lower upfront costs and the ability to outsource maintenance to experts mean that even small and medium-sized insurers can afford a state-of-the-art system that might otherwise be out of reach. But even big insurers that have the budget and staff to buy and run a system can find SaaS to be a compelling option. Whether the insurer is big, small or mid-sized, SaaS offers a platform that, because it is continually upgraded, is unlikely to become obsolete.

Additionally, going the SaaS route can get your system up and running faster, as you won’t need to buy hardware and install the system on your servers. How long it will take depends on the amount of customization required and on the data requirements.

Scalability is another plus. When the business grows, the customer can just adjust the monthly fee instead of having to buy more hardware.

A 2014 Gartner survey of organizations in 10 countries found that most are deploying SaaS for mission-critical functions. The traditional on-premises software model was expected to shrink from 34% at the time to 18% by 2017, Gartner said.

While these are powerful advantages, there are some real or at least perceived disadvantages with SaaS. Probably the biggest barrier is reluctance to have a third party store the data. A ceded system uses nearly all of an insurance company’s data, sometimes over many underwriting years, and executives must be comfortable that their company’s data is 100% safe when it’s stored elsewhere. That can be a big leap of faith that some companies aren’t ready to make.

A hybrid solution can be a good way around that. More common in Europe, hybrid solutions are starting to catch on in North America. With this model, the software and data reside at the insurer or reinsurer, which also owns the license. The difference is that the vendor connects to the insurer’s environment to monitor, optimize and maintain the system. As with SaaS, the insurer’s IT department has little involvement in continuing operations; all of that work is outsourced.

How much access does the vendor have to the insurer’s data and systems under a hybrid solution? There are various options, depending on the insurer’s comfort level.

What’s the right way to deploy a ceded reinsurance system at your company? Each company is unique, and the best answer depends on many factors. But whether you go the on-premises route, choose SaaS or use a hybrid solution, you’ll get a modern system that handles reinsurance efficiently and effectively. Your company is going to benefit greatly.