Tag Archives: Tim Cook


Apple v. FBI: Inevitable Conflicts on Tech

The battle between the FBI and Apple over the unlocking of a terrorist’s iPhone will likely require Congress to create legislation. That’s because there really aren’t any existing laws that encompass technologies such as these. The battle is between security and privacy, with Silicon Valley fighting for privacy. The debates in Congress will be ugly, uninformed and emotional. Lawmakers won’t know which side to pick and will flip-flop between what lobbyists ask for and the public’s fear du jour. Because there is no consensus on what is right or wrong, any decision legislators make today will likely be changed tomorrow.

This fight is a prelude to things to come, not only with encryption technologies but with everything from artificial intelligence to drones, robotics and synthetic biology. Technology is moving faster than our ability to understand it, and there is no consensus on what is ethical. It isn’t just that lawmakers are not well informed; the originators of the technologies themselves don’t understand the full ramifications of what they are creating. They may take strong positions today based on their emotions and financial interests, but, as they learn more, they, too, will change their views.

Imagine if there were a terror attack in Silicon Valley — at the headquarters of Facebook or Apple. Do you think that Tim Cook or Mark Zuckerberg would continue to put privacy ahead of national security?

It takes decades, sometimes centuries, to reach the type of consensus that is needed to enact the far-reaching legislation that Congress will have to consider. Laws are essentially codified ethics, a consensus that is reached by society on what is right and wrong. This happens only after people understand the issues and have seen the pros and cons.

Consider our laws on privacy. These date back to the late 1800s, when newspapers started publishing gossip, including a series of intrusive stories about Boston lawyer Samuel Warren and his family. This led Warren and his law partner, future U.S. Supreme Court Justice Louis Brandeis, to write a Harvard Law Review article, “The Right to Privacy,” which argued for the right to be left alone. This essay laid the foundation of American privacy law, which has been evolving ever since. It also took centuries to create today’s copyright laws, intangible property rights and contract law. All of these followed the development of technologies such as the printing press and the steam engine.

Today, technology is progressing on an exponential curve; advances that once took decades now happen in years, sometimes months. Consider that the first iPhone was released in June 2007. It was little more than an iPod with an embedded cell phone. It has since evolved into a device that captures our deepest personal secrets, keeps track of our lifestyles and habits and is becoming our health coach and mentor. Just five years ago, such debates about unlocking this device would have been inconceivable.

A greater privacy risk than the lock on the iPhone is posed by the cameras and sensors that are being placed everywhere. There are cameras on our roads, in public areas and malls and in office buildings. One company recently announced that it is partnering with AT&T to track people’s travel patterns and behaviors through their mobile phones so that its billboards can display personalized ads. The billboards themselves will soon include cameras to watch the expressions of passersby.

These cameras often record everything that happens around them. Soon there will be cameras looking down at us from drones and from privately owned microsatellites. Our TVs, household appliances and self-driving cars will be watching us. The cars will also keep logs of where we have been, making it possible to piece together who we have met and what we have done — just as our smartphones already can. These technologies carry major security risks and are largely unregulated. Each has its nuances and will require different policy considerations.

The next technology that will surprise, shock and scare the public is gene editing. CRISPR–Cas9 is a system for engineering genomes that was developed simultaneously by teams of scientists at different universities. This technology, which has become inexpensive enough for labs all over the world to use, allows the editing of genomes—the basic building blocks of life. It holds the promise of providing cures for genetic diseases, creating drought-resistant and high-yield plants and producing new sources of fuel. It can also be used to “edit” the genomes of animals and human beings.

China is leading the way in creating commercial applications for CRISPR, editing the genomes of goats, sheep, pigs, monkeys and dogs to give them larger muscles, more fur and meat, and different shapes and sizes. Scientists have demonstrated that these traits can be passed on to future generations, in effect creating new species. China sees gene editing as a way to feed its more than a billion people and gain a global advantage.

China has also made progress in creating designer babies. In April 2015, scientists in China revealed that they had tried using CRISPR to edit the genomes of human embryos. Although these embryos could not develop to term, viable embryos could one day be engineered to cure disease or provide desirable traits. The risk is that geneticists with good intentions could mistakenly engineer changes in DNA that generate dangerous mutations and cause painful deaths.

In December 2015, an international group of scientists gathered at the National Academy of Sciences to call for a moratorium on making inheritable changes to the human genome until there is a “broad societal consensus about the appropriateness” of any proposed change. But then, this February, the British government announced that it had approved experiments by scientists at the Francis Crick Institute to edit the genomes of human embryos in research on infertility. I have little doubt that these scientists will stay within ethical lines. But is there anything to stop governments themselves from surreptitiously working to develop a race of superhuman soldiers?

The creators of these technologies usually don’t understand the long-term ramifications of what they are creating, and, when they do, it is often too late, as was the case with CRISPR. One of its inventors, Jennifer Doudna, wrote a touching essay in the December 2015 issue of Nature. “I was regularly lying awake at night wondering whether I could justifiably stay out of an ethical storm that was brewing around a technology I had helped to create,” she lamented. She has called for human genome editing to “be on hold pending a broader societal discussion of the scientific and ethical issues surrounding such use.”

Artificial intelligence, by contrast, is far from being a threat, yet it is stirring deep fears. AI today is nothing more than brute-force computing, with superfast computers crunching massive amounts of data. But it is advancing so fast that tech luminaries such as Elon Musk, Bill Gates and Stephen Hawking worry it will evolve beyond human capability and become an existential threat to mankind. Others fear that it will create wholesale unemployment. Scientists are trying to reach a consensus on how AI can be used benevolently, but, as with CRISPR, how can you regulate something that anyone, anywhere, can develop?

And soon, we will have robots that serve us and become our companions. These, too, will watch everything that we do and raise new legal and ethical questions. They will evolve to the point that they seem human. What happens, then, when a robot asks for the right to vote or kills a human in self-defense?

Thomas Jefferson said in 1816, “Laws and institutions must go hand in hand with the progress of the human mind. As that becomes more developed, more enlightened, as new discoveries are made, new truths disclosed, and manners and opinions change with the change of circumstances, institutions must advance also, and keep pace with the times.” But how can our policy makers and institutions keep up with the advances when the originators of the technologies themselves can’t?

There is no answer to this question.

What the Apple Watch Says About Innovation

Now that the dust has settled on the long-anticipated unveiling of the Apple Watch, a major obstacle to its success is coming into view: the iPhone.

The Apple Watch has been the subject of breathless anticipation for years because, as Tim Cook said at its introduction, it represents “the next chapter in Apple’s story.” Conceived three years ago, shortly after Steve Jobs’ passing, the Watch is the embodiment of multiple dramatic arcs and aspirations.

It is the first major product developed under Tim Cook and Jony Ive outside of Jobs’ shadow—and thus has huge personal and legacy implications for both men.

The Watch is also Apple’s attempt to catalyze and dominate the wearables category. Given the intense competition in the smartphone market and the widespread view that new killer products, platforms and ecosystems will emerge somewhere at the intersection of the Internet of Things and wearable computing, the Watch is central to Apple’s post-iPhone strategy.

It might seem that the iPhone should be the Apple Watch’s greatest asset. Apple is positioning the Watch as a jaw-dropping, must-have peripheral to the iPhone, and millions of iPhone-toting Apple fans are sure to queue up to buy it at its 2015 launch. But do not mistake early adopters for market validation. For billions of other potential customers, the Watch’s tight tethering to the iPhone could be a fundamental weakness.

In the short term, Apple must convince existing customers that they need a Watch in addition to their iPhone. Apple, however, has yet to offer a convincing case for this.

Long-rumored groundbreaking health apps built on Watch-mounted sensors have not materialized—disappointing many healthcare watchers (including me). That leaves Apple competing against more narrowly focused wearable devices like the Fitbit and Pebble—but at several times the price and a fraction of the battery life.

Apple is also touting Apple Pay as a killer app that will attract consumers to the Watch. But, while Apple Pay is an intriguing service-oriented strategy for Apple, there is no need for consumers to buy an Apple Watch to use it. Apple Pay will work fine with just the iPhone.

For now, it seems that Apple has higher hopes for the Watch as a fashion accessory than as a category-defining killer app. But even that highbrow aspiration has ample skeptics who question the Watch’s fashion chops and business potential.

In the long term, when and if compelling apps emerge for the Watch, Apple will have to convince Watch enthusiasts that they need an iPhone in addition to the Watch.

This might not seem like a limiting factor, given that there are more than 300 million active iPhone users. But imagine if the iPhone were just a peripheral to the Mac, with its addressable market limited to Mac owners. Or imagine if the iPhone had to be tethered to the iPod. In retrospect, don’t such scenarios sound implausibly shortsighted?

Both the Mac and the iPod were great products with loyal followings at the iPhone’s introduction. Apple, however, did not limit the iPhone to its predecessors’ market niches. As shown in Figure 1, the result was a blockbuster that lifted Apple far beyond those earlier products. The iPhone has grown to represent more than half of Apple’s revenues and perhaps even more of its profits.


Figure 1 — Apple Device Sales

Now the iPhone has a loyal following but a small share of the smartphone market. Will Tim Cook limit the Apple Watch’s success to iPhone owners, or will he free it to dominate the potentially larger wearable-device space?

Freeing the Watch is a strategic imperative.

History tells us that market-leading technology products like the iPhone inevitably fade, and the companies that depend on them must innovate into the succeeding categories or fade as well. Kodak, Polaroid, IBM, DEC, Nokia, Motorola, BlackBerry, Intel, Sony, Dell and Microsoft are among those fading or faded companies.

All of those companies harnessed disruptive advances in information technology only for (at best) incremental enhancements to their dominant products. In doing so, they missed out on the new killer products, business models and industries that coalesced around the platforms those advances enabled.

Thus, Kodak wasted decades trying to deploy digital photography (which it invented) as a mere enhancement to its dominant film business. Microsoft was slow to the web and the cloud and killed its early e-reader and tablet devices because of internecine struggles over how those new categories related to its Windows and Office businesses. The list goes on: IBM did not lead in minicomputers. DEC and every other leading minicomputer maker missed out on personal computers. Motorola and Nokia were killed by smartphones, and BlackBerry is near death.

Limiting the Watch to a peripheral role in the iPhone-centric ecosystem would repeat the same mistake made by those earlier market-leading technology companies.

That’s not to say there isn’t a lot of money to be made in the defend-the-cash-cow approach. Just look at the more than $650 billion in revenue and nearly $250 billion in earnings that Steve Ballmer delivered in his tenure as Microsoft CEO. Ballmer achieved those impressive numbers by defending and milking Microsoft’s dominant Office and Windows products. Ballmer, Microsoft and its investors missed out, however, on the market value created by Google, Apple, Facebook, Twitter and others that capitalized on search, big data, cloud computing, mobile devices and social media. Ballmer’s inability to grow beyond the core products he inherited left Microsoft’s market value stagnant for a decade.

Likewise, Tim Cook could nurse Apple’s iPhone-driven revenue stream for a long time. I doubt, however, that Tim Cook would be satisfied with a value-creation legacy comparable to Steve Ballmer’s.

It is too early to dismiss the Apple Watch’s potential to transcend the iPhone. We’ll get a measure of Apple’s foresight when it releases the software development kit (SDK) for the Watch. That will show how fundamentally tethered the Watch is to the iPhone and whether Apple has laid the groundwork for the Watch to be standalone at some point.
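To make that tethering concrete, here is a minimal, hypothetical sketch in Swift of a Watch-side screen that depends entirely on its paired iPhone for data. It assumes the WCSession API from Apple's WatchConnectivity framework; the controller class, the label outlet and the "price" message key are illustrative inventions, not Apple sample code.

```swift
import WatchKit
import WatchConnectivity

// Hypothetical Watch-side controller: the Watch renders data that the
// paired iPhone pushes to it, illustrating the tethering discussed above.
class PriceController: WKInterfaceController, WCSessionDelegate {
    @IBOutlet var priceLabel: WKInterfaceLabel!

    override func awake(withContext context: Any?) {
        super.awake(withContext: context)
        if WCSession.isSupported() {
            let session = WCSession.default
            session.delegate = self
            session.activate() // opens the channel to the paired iPhone
        }
    }

    // Required delegate callback; nothing to do in this sketch.
    func session(_ session: WCSession,
                 activationDidCompleteWith activationState: WCSessionActivationState,
                 error: Error?) {}

    // The iPhone app sends fresh data; without the phone in range,
    // this Watch app has nothing new to display.
    func session(_ session: WCSession,
                 didReceiveMessage message: [String: Any]) {
        if let price = message["price"] as? String {
            priceLabel.setText(price)
        }
    }
}
```

The tell in the SDK will be whether every useful Watch app is forced into this phone-in-the-loop pattern, or whether the Watch can eventually fetch and compute on its own.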

The real gut check for Tim Cook will come further out in time, when technology and creativity enable wearable devices like the Watch not only to stand alone from the iPhone but also to replace it.

Will Tim Cook allow the Watch to cannibalize iPhone sales—as Apple previously allowed the iPhone to eat away at the iPod and risked the iPad doing the same to the Mac? Or will Apple stagnate as competitors and new entrants out-innovate it? Will Apple fade away as the riches from the new killer apps, devices, ecosystems and business models that coalesce around emerging wearables-centric platforms flow to others?