
The Brewing Crisis Over Jobs

Everyone has heard the old anecdote about the frog in a pot of water. If the temperature is raised slowly, the frog won’t react, eventually allowing itself to get boiled. That’s where we’re heading as a country when it comes to technological advances and the threat they pose to millions of jobs.

Seemingly every day, there are new stories in the media about artificial intelligence, data and robotics — and the jobs they threaten in retail, transportation and freight, and even the legal profession. Yet no one is jumping out of the pot.

Let’s be clear: This is not science fiction. In just recent days, there have been articles on Amazon’s automation ambitions, described by the New York Times as “putting traditional retail jobs in jeopardy,” and on the legal profession bracing for technology taking over some tasks once handled by lawyers.

As reported in Recode, a new study by the research firm PwC found that nearly four out of 10 jobs in the U.S. could be “vulnerable to replacement by robots in the next 15 years.” Many of those jobs belong to truck drivers, one of the most common occupations in states across the country.

See also: Why Trump’s Travel Ban Hurts Innovation  

Yet when President Trump hosted truck drivers at the White House recently, he dedicated his remarks to healthcare without uttering a word about the advanced driverless semi fleets that will soon replace them. His Treasury Secretary Steven Mnuchin shockingly said in an interview last week that we’re “50 to 100 years” away from artificial intelligence threatening jobs.

It’s easy for sensationalist headlines about AI to dominate, like those about Elon Musk’s warning that it poses an existential threat. Yet the attention of people such as Musk, Bill Gates and Stephen Hawking should be a signal to Trump and Mnuchin that AI and related robotics and automation are moving at a far faster clip than they are acknowledging. It should be on the administration’s radar screen, and officials should be jumping out of the boiling water.

Solutions won’t come easy. Already some experts suggest a universal basic income will be necessary to offset the job losses. We also have to help our workforce make the transition. Educational institutions such as Miami-Dade College and Harvard University have introduced advanced programming courses that take students from zero to six programming languages on a fast track. More needs to be done. This should be the most innovative decade in human history, and it has to be if we’re going to avoid a Mad Max dystopia in favor of a Star Trek future.

Of course, there are those who say similar warnings were raised as technology revolutionized agriculture and other industries along the way. They might argue that then, as now, those advances led to more jobs. We would all welcome that and the potential these changes will represent for improving lives.

See also: Can Trump Make ‘the Cyber’ Secure?  

Technological advances could greatly reduce the cost of living, make housing more affordable and solve some of our biggest challenges, whether in energy or in long-term care, an issue painfully familiar to so many families. They may also improve quality of life over the long term, as men and women gain greater flexibility to spend time with loved ones rather than dedicating 40 or more hours a week to work, and so many more to commuting.

In the near term, however, the job losses that are possible could inflict tremendous economic pain. We are far from where we need to be. That will continue to be the case until policymakers, educators and innovators come together to address the reality before us. We won’t solve this overnight, but we can’t afford to wait until it’s too late.

This was written by Vivek Wadhwa and Jeff Greene.

AI’s Promise Is Finally Upon Us

We have been hearing predictions for decades of a takeover of the world by artificial intelligence. In 1957, Herbert A. Simon predicted that within 10 years a digital computer would be the world’s chess champion. That didn’t happen until 1997. And despite Marvin Minsky’s 1970 prediction that “in from three to eight years we will have a machine with the general intelligence of an average human being,” we still consider that a feat of science fiction.

The pioneers of artificial intelligence were surely off on the timing, but they weren’t wrong; AI is coming. It is going to be in our TV sets and driving our cars; it will be our friend and personal assistant; it will take the role of our doctor. There have been more advances in AI over the past three years than there were in the previous three decades.

Even technology leaders such as Apple have been caught off guard by the rapid evolution of machine learning, the technology that powers AI. At its recent Worldwide Developers Conference, Apple opened up its AI systems so that independent developers could help it create technologies that rival what Google and Amazon have already built. Apple is way behind.

The AI of the past used brute-force computing to analyze data and present them in a way that seemed human. The programmer supplied the intelligence in the form of decision trees and algorithms. Imagine that you were trying to build a machine that could play tic-tac-toe. You would give the computer specific rules on what move to make, and it would follow them. That is essentially how IBM’s Deep Blue computer beat chess Grandmaster Garry Kasparov in 1997, by using a supercomputer to calculate every possible move faster than he could.
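To make that concrete, here is a minimal sketch, in Python and purely for illustration, of the old rule-based approach: the programmer supplies the intelligence as an ordered list of hand-written rules, and the machine simply applies the first one that fires.

```python
# Minimal sketch of "old" rule-based game AI: every bit of intelligence is a
# hand-written rule supplied by the programmer. Illustrative only.

def rule_based_move(board):
    """Pick a move for 'X' on a 9-cell tic-tac-toe board ('X', 'O' or None)."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

    def completing_cell(player):
        # Find a line where `player` holds two cells and the third is empty.
        for a, b, c in lines:
            cells = [board[a], board[b], board[c]]
            if cells.count(player) == 2 and cells.count(None) == 1:
                return (a, b, c)[cells.index(None)]
        return None

    rules = [
        lambda: completing_cell("X"),                                       # 1. win now
        lambda: completing_cell("O"),                                       # 2. block the opponent
        lambda: 4 if board[4] is None else None,                            # 3. take the center
        lambda: next((i for i in (0, 2, 6, 8) if board[i] is None), None),  # 4. take a corner
        lambda: next((i for i in range(9) if board[i] is None), None),      # 5. take anything
    ]
    for rule in rules:
        move = rule()
        if move is not None:
            return move

print(rule_based_move(["X", "X", None, "O", "O", None, None, None, None]))  # -> 2 (winning move)
```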

See also: AI: Everywhere and Nowhere (Part 2)

Today’s AI uses machine learning, in which you give it examples of previous games and let it learn from those examples. The computer is taught what to learn and how to learn and makes its own decisions. What’s more, the new AIs are modeling the human mind itself, using techniques similar to our learning processes. Before, it could take millions of lines of computer code to perform tasks such as handwriting recognition. Now it can be done in hundreds of lines. What is required is a large number of examples so that the computer can teach itself.
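A machine-learning sketch of the same idea, using scikit-learn, looks very different: no rules are written at all; the computer is handed labeled examples and fits a model to them. The toy positions and labels below are invented for illustration.

```python
# Minimal sketch of the learning-from-examples approach: no rules are coded,
# only example positions and outcomes. The data here is invented.
from sklearn.linear_model import LogisticRegression

# Each example is a board encoded as numbers (1 = our piece, -1 = opponent,
# 0 = empty), labeled with whether that position eventually led to a win.
positions = [
    [1, 1, 0, -1, -1, 0, 0, 0, 0],
    [1, -1, 1, 0, -1, 0, 0, 0, 0],
    [0, 0, -1, 1, 1, 1, -1, 0, 0],
    [-1, -1, 1, 0, 1, 0, 1, 0, -1],
]
led_to_win = [1, 0, 1, 1]

model = LogisticRegression().fit(positions, led_to_win)

# The model now scores unseen positions without ever having been told the rules.
print(model.predict_proba([[1, 0, 0, 0, -1, 0, 0, 0, 0]])[0][1])
```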

The new programming techniques use neural networks, which are modeled on the human brain: information is processed in layers, and the connections between those layers are strengthened based on what is learned. This is called deep learning because of the increasing numbers of layers of information that are processed by increasingly faster computers. Deep learning is enabling computers to recognize images, voice and text — and to do human-like things.
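As a minimal sketch of that layered structure, here is a tiny network built with the PyTorch library; the dimensions are arbitrary, and the point is simply that learning means nudging the connection weights between layers.

```python
# Minimal sketch of a layered ("deep") neural network; dimensions are arbitrary.
import torch
import torch.nn as nn

deep_net = nn.Sequential(
    nn.Linear(784, 128),  # input layer: e.g. a 28x28 handwriting image flattened to 784 numbers
    nn.ReLU(),
    nn.Linear(128, 64),   # hidden layer: an intermediate representation
    nn.ReLU(),
    nn.Linear(64, 10),    # output layer: scores for 10 possible digits
)

optimizer = torch.optim.SGD(deep_net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

fake_image = torch.randn(1, 784)   # stand-in for a real training example
true_label = torch.tensor([3])

loss = loss_fn(deep_net(fake_image), true_label)  # how wrong was the prediction?
loss.backward()                                   # how did each connection contribute to the error?
optimizer.step()                                  # strengthen or weaken connections accordingly
```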

Google search used to rely on a technique called PageRank to come up with its results. Using rigid, proprietary algorithms, Google analyzed the text and links on Web pages to determine what was most relevant and important. It is now replacing this technique in search and in most of its other products with algorithms based on deep learning, the same technology it used to defeat a human player at the game Go. During that extremely complex game, observers were themselves confused as to why the computer had made the moves it did.
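For contrast, here is a toy version of the PageRank-style computation being displaced: a page’s importance is derived mechanically from who links to it, by repeatedly redistributing rank along the links. The three-page link graph below is invented.

```python
# Toy PageRank: importance flows along links and settles after repeated passes.
import numpy as np

links = {"A": ["B", "C"],   # page A links to B and C
         "B": ["C"],
         "C": ["A"]}
pages = list(links)
n = len(pages)
damping = 0.85

rank = np.ones(n) / n
for _ in range(50):  # power iteration until the ranks settle
    new_rank = np.full(n, (1 - damping) / n)
    for i, page in enumerate(pages):
        share = damping * rank[i] / len(links[page])
        for target in links[page]:
            new_rank[pages.index(target)] += share
    rank = new_rank

print(dict(zip(pages, rank.round(3))))  # C, with the most inbound links, ends up ranked highest
```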

In the fields in which it is trained, AI is now exceeding the capabilities of humans.

AI has applications in every area in which data are processed and decisions required. Wired founding editor Kevin Kelly likened AI to electricity: a cheap, reliable, industrial-grade digital smartness running behind everything. He said that it “will enliven inert objects, much as electricity did more than a century ago. Everything that we formerly electrified we will now ‘cognitize.’ This new utilitarian AI will also augment us individually as people (deepening our memory, speeding our recognition) and collectively as a species. There is almost nothing we can think of that cannot be made new, different or interesting by infusing it with some extra IQ. In fact, the business plans of the next 10,000 start-ups are easy to forecast: Take X and add AI. This is a big deal, and now it’s here.”

See also: AI: The Next Stage in Healthcare  

AI will soon be everywhere. Businesses are infusing AI into their products and using it to analyze the vast amounts of data they are gathering. Google, Amazon and Apple are working on voice assistants for our homes that manage our lights, order our food and schedule our meetings. Robotic assistants such as Rosie from “The Jetsons” and R2-D2 of Star Wars are about a decade away.

Do we need to be worried about the runaway “artificial general intelligence” that goes out of control and takes over the world? Yes — but perhaps not for another 15 or 20 years. There are justified fears that rather than being told what to learn and complementing our capabilities, AIs will start learning everything there is to learn and know far more than we do. Though some people, such as futurist Ray Kurzweil, see us using AI to augment our capabilities and evolve together, others, such as Elon Musk and Stephen Hawking, fear that AI will usurp us. We really don’t know where all this will go.

What is certain is that AI is here and making amazing things possible.


Apple v. FBI: Inevitable Conflicts on Tech

The battle between the FBI and Apple over the unlocking of a terrorist’s iPhone will likely require Congress to create legislation. That’s because there really aren’t any existing laws that encompass technologies such as these. The battle is between security and privacy, with Silicon Valley fighting for privacy. The debates in Congress will be ugly, uninformed and emotional. Lawmakers won’t know which side to pick and will flip-flop between what lobbyists ask for and the public’s fear du jour. Because there is no consensus on what is right or wrong, any decision legislators make today will likely be changed tomorrow.

This fight is a prelude to things to come, not only with encryption technologies but with everything from artificial intelligence to drones, robotics and synthetic biology. Technology is moving faster than our ability to understand it, and there is no consensus on what is ethical. It isn’t just that lawmakers are not well informed; the originators of the technologies themselves don’t understand the full ramifications of what they are creating. They may take strong positions today based on their emotions and financial interests, but, as they learn more, they, too, will change their views.

Imagine if there was a terror attack in Silicon Valley — at the headquarters of Facebook or Apple. Do you think that Tim Cook or Mark Zuckerberg would continue to put privacy ahead of national security?

It takes decades, sometimes centuries, to reach the type of consensus that is needed to enact the far-reaching legislation that Congress will have to consider. Laws are essentially codified ethics, a consensus that is reached by society on what is right and wrong. This happens only after people understand the issues and have seen the pros and cons.

Consider our laws on privacy. These date back to the late 1800s, when newspapers started publishing gossip. They wrote a series of intrusive stories about Boston lawyer Samuel Warren and his family. This led his law partner, future U.S. Supreme Court Justice Louis Brandeis, to write a Harvard Law Review article, “The Right to Privacy,” which argued for the right to be left alone. This essay laid the foundation of American privacy law, which has been evolving ever since. It also took centuries to create today’s copyright laws, intangible property rights and contract law. All of these followed the development of technologies such as the printing press and steam engine.

Today, technology is progressing on an exponential curve; advances that would take decades now happen in years, sometimes months. Consider that the first iPhone was released in June 2007. It was little more than an iPod with an embedded cell phone. This has evolved into a device that captures our deepest personal secrets, keeps track of our lifestyles and habits and is becoming our health coach and mentor. It was inconceivable just five years ago that there could be such debates about unlocking this device.

A greater privacy risk than the lock on the iPhone comes from the cameras and sensors that are being placed everywhere. There are cameras on our roads, in public areas and malls and in office buildings. One company just announced that it is partnering with AT&T to track people’s travel patterns and behaviors through their mobile phones so that its billboards can display personalized ads. Even billboards will include cameras to watch the expressions of passersby.

Cameras often record everything that is happening. Soon there will be cameras looking down at us from drones and in privately owned microsatellites. Our TVs, household appliances and self-driving cars will be watching us. The cars will also keep logs of where we have been and make it possible to piece together who we have met and what we have done — just as our smartphones can already do. These technologies have major security risks and are largely unregulated. Each has its nuances and will require different policy considerations.

The next technology that will surprise, shock and scare the public is gene editing. CRISPR–Cas9 is a system for engineering genomes that was simultaneously developed by teams of scientists at different universities. This technology, which has become inexpensive enough for labs all over the world to use, allows the editing of genomes—the basic building blocks of life. It holds the promise of providing cures for genetic diseases, creating drought-resistant and high-yield plants and producing new sources of fuel. It can also be used to “edit” the genomes of animals and human beings.

China is leading the way in creating commercial applications for CRISPR, having edited goats, sheep, pigs, monkeys and dogs. It has given them larger muscles and more fur and meat and altered their shapes and sizes. Scientists demonstrated that these traits can be passed to future generations, creating a new species. China sees this editing as a way to feed its population of more than a billion and to gain a global advantage.

China has also made progress in creating designer babies. In April 2015, scientists in China revealed that they had tried using CRISPR to edit the genomes of human embryos. Although these embryos could not develop to term, viable embryos could one day be engineered to cure disease or provide desirable traits. The risk is that geneticists with good intentions could mistakenly engineer changes in DNA that generate dangerous mutations and cause painful deaths.

In December 2015, an international group of scientists gathered at the National Academy of Sciences to call for a moratorium on making inheritable changes to the human genome until there is a “broad societal consensus about the appropriateness” of any proposed change. But then, this February, the British government announced that it had approved experiments by scientists at the Francis Crick Institute to treat certain cases of infertility. I have little doubt that these scientists will stay within ethical lines. But is there anything to stop governments themselves from surreptitiously working to develop a race of superhuman soldiers?

The creators of these technologies usually don’t understand the long-term ramifications of what they are creating, and, when they do, it is often too late, as was the case with CRISPR. One of its inventors, Jennifer Doudna, wrote a touching essay in the December issue of Nature. “I was regularly lying awake at night wondering whether I could justifiably stay out of an ethical storm that was brewing around a technology I had helped to create,” she lamented. She has called for human genome editing to “be on hold pending a broader societal discussion of the scientific and ethical issues surrounding such use.”

A technology that is far from being a threat is artificial intelligence. Yet it is stirring deep fears. AI is, today, nothing more than brute-force computing, with superfast computers crunching massive amounts of data. Yet it is advancing so fast that tech luminaries such as Elon Musk, Bill Gates and Stephen Hawking worry it will evolve beyond human capability and become an existential threat to mankind. Others fear that it will create wholesale unemployment. Scientists are trying to come to a consensus about how AI can be used in a benevolent way, but, as with CRISPR, how can you regulate something that anyone, anywhere, can develop?

And soon, we will have robots that serve us and become our companions. These, too, will watch everything that we do and raise new legal and ethical questions. They will evolve to the point that they seem human. What happens, then, when a robot asks for the right to vote or kills a human in self-defense?

Thomas Jefferson said in 1816, “Laws and institutions must go hand in hand with the progress of the human mind. As that becomes more developed, more enlightened, as new discoveries are made, new truths disclosed, and manners and opinions change with the change of circumstances, institutions must advance also, and keep pace with the times.” But how can our policy makers and institutions keep up with the advances when the originators of the technologies themselves can’t?

There is no answer to this question.

The Robocalypse for Knowledge Jobs

Long-time Costa Rican National Champion Bernal Gonzalez told a very young me in 1994 that the world’s best chess-playing computer wasn’t quite strong enough to be among the top 100 players in the world.

Technology can advance exponentially, and just three years later, world champion Garry Kasparov was defeated by IBM’s chess-playing supercomputer Deep Blue. But chess is a game of logic where all potential moves are sharply defined, and a powerful enough computer can simulate many moves ahead.


Things got much more interesting in 2011, when IBM’s Jeopardy-playing computer Watson defeated Ken Jennings, who held the record of winning 74 Jeopardy matches in a row, and Brad Rutter, who had won the most money on the show. Winning at Jeopardy required Watson to understand clues in natural spoken language, learn from its own mistakes, buzz in and answer in natural language faster than the best Jeopardy-playing humans. According to IBM, “more than 100 different techniques are used to analyze natural language, identify sources, find and generate hypotheses, find and score evidence and merge and rank hypotheses.” Now that’s impressive — and much more worrisome for those employed as knowledge workers.
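As a conceptual toy sketch, and emphatically not IBM’s implementation, the generate-score-merge-rank pattern in that quote can be pictured like this: produce candidate answers, score each against several independent pieces of evidence, then combine the scores and take the best.

```python
# Conceptual sketch of a generate/score/merge/rank answer pipeline.
# The candidates, scorers and numbers are invented for illustration.
def generate_hypotheses(clue):
    # A real system would mine these candidates from parsed documents.
    return ["Isaac Newton", "Albert Einstein", "Niels Bohr"]

def keyword_overlap(clue, candidate):
    return len(set(clue.lower().split()) & set(candidate.lower().split()))

def source_support(candidate):
    support = {"Isaac Newton": 0.9, "Albert Einstein": 0.7, "Niels Bohr": 0.4}
    return support.get(candidate, 0.1)

def answer(clue):
    scored = []
    for candidate in generate_hypotheses(clue):
        evidence = [keyword_overlap(clue, candidate), source_support(candidate)]
        scored.append((sum(evidence), candidate))   # merge the evidence scores
    return max(scored)[1]                           # rank and keep the best

print(answer("sir isaac formulated the laws of motion"))
```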


What do game-playing computers have to do with white-collar knowledge jobs? Well, Big Blue didn’t spend $1 billion developing Watson just to win a million bucks playing Jeopardy. It was a proof of concept and a marketing move. A computer that can understand and respond in natural language can be adapted to do things we currently use white-collar, educated workers to do, starting with automating call centers and, sooner rather than later, moving on up to more complex, higher-level roles, just as we have seen with the automation of blue-collar jobs.

In the four years since its Jeopardy success, Watson has continued advancing and is now being used for legal research and to help hospitals provide better care. And Watson is just getting started. Until very recently, the cost of using this type of technology was in the millions of dollars, making it unlikely that any but the largest companies could make the business case for replacing knowledge jobs with artificial intelligence. In late 2013, IBM put Watson “on the cloud,” meaning that you can now rent Watson time without having to buy the very expensive servers.

Watson is cool, but it requires up-front programming of apps for very specific activities and, while incredibly smart, lacks any sort of emotional intelligence, making it uncomfortable for regular people to deal with. In other words, even if you spent the millions of dollars to automate your call center with Watson, it wouldn’t be able to connect with your customers, because it has no sense of emotions. It would be like having Star Trek’s Data answering your phones.

Then came Amelia…


Amelia is an AI platform that aims to automate business processes that up until now had required educated human labor. She’s different from Watson in many ways that make her much better-suited to actually replace you at the office. According to IPsoft, Amelia aims at working alongside humans to “shoulder the burden of tedious, often laborious tasks.”

She doesn’t require expensive up-front programming to learn how to do a task, and she is hosted on the cloud, so there is no need to buy million-dollar servers. To train her, you feed her your entire set of employee training manuals, and she reads and digests them in a matter of seconds: just upload the text files, and she can grasp the implications and apply logic to make connections between the concepts. Once she has that, she can start handling customer emails and phone calls, and she can even recognize what she doesn’t know and search the Internet and the company’s intranet to find an answer. If she can’t find an answer, she’ll transfer the customer to a human employee for help. You can even let her listen in on the conversations she doesn’t handle herself, and she learns how to do the job from the existing staff, as a new employee would, except exponentially faster and with perfect memory. She is also fluent in 20 languages.
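As a rough, hypothetical illustration of that workflow (ingest the documentation, answer from the closest match, hand off to a human when confidence is too low), something bare-bones could be assembled from standard text-similarity tools; none of this reflects IPsoft’s actual implementation.

```python
# Hypothetical sketch: answer from ingested manuals, escalate when unsure.
# This illustrates the workflow only; it is not IPsoft's technology.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

manual_sections = [
    "To change the address on a policy, verify the caller's identity and update the record.",
    "Bodily injury limits can be increased once a new premium quote is accepted.",
    "Password resets require the account email and the answer to a security question.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(manual_sections)

def handle(question, threshold=0.25):
    scores = cosine_similarity(vectorizer.transform([question]), doc_vectors)[0]
    best = scores.argmax()
    if scores[best] < threshold:
        return "ESCALATE: routing to a human agent"   # recognizes what it doesn't know
    return manual_sections[best]

print(handle("I need to change my address"))          # answered from the manual
print(handle("Do you sell pet insurance for dogs?"))  # off-topic, handed to a human
```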

Like Watson, Amelia learns from every interaction and builds a mind-map that eventually is able to handle just about anything your staff handled before. Her most significant advantage is that Amelia has an emotional component to go with her super brains. She draws on research in the field of affective computing, “the study of the interaction between humans and computing systems capable of detecting and responding to the user’s emotional state.” Amelia can read your facial expressions, gestures, speech and even the rhythm of your keystrokes to understand your emotional state, and she can respond accordingly in a way that will make you feel better. Her EQ is modeled in a three-dimensional space of pleasure, arousal and dominance through a modeling system called PAD. If you’re starting to think this is mind-blowing, you are correct!
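As a hypothetical illustration of the PAD idea, the caller’s state can be pictured as a point in that three-dimensional pleasure/arousal/dominance space, with the response style chosen from it; the thresholds and rules below are invented and are not Amelia’s.

```python
# Hypothetical sketch of choosing a response style from a PAD emotional state.
# Thresholds and categories are invented for illustration.
from dataclasses import dataclass

@dataclass
class PADState:
    pleasure: float   # -1 (very displeased) .. +1 (delighted)
    arousal: float    # -1 (calm)            .. +1 (agitated)
    dominance: float  # -1 (feels helpless)  .. +1 (feels in control)

def choose_tone(state: PADState) -> str:
    if state.pleasure < -0.3 and state.arousal > 0.3:
        return "apologize, de-escalate, offer to fix the problem immediately"
    if state.dominance < -0.3:
        return "reassure the caller and walk through the steps one at a time"
    return "answer concisely and confirm the outcome"

# Signals such as hurried keystrokes and negative wording might map to this state:
frustrated_caller = PADState(pleasure=-0.7, arousal=0.8, dominance=-0.2)
print(choose_tone(frustrated_caller))
```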

The magic is in the context. Instead of merely deciphering words, including insurance jargon, when a policyholder calls in to add a vehicle or change an address, IPsoft explains, Amelia will engage with the actual question being asked. For example, Amelia would understand requests that are phrased differently but essentially mean the same thing: “My address changed” and “I need to change my address.” Or, “I want to increase my BI limits” and “I need to increase my bodily injury limits.”
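A toy sketch of that kind of matching might expand jargon such as “BI” and map differently worded requests onto the same underlying intent; the synonym table and intents below are made up for illustration.

```python
# Toy intent matching: different phrasings resolve to the same request.
SYNONYMS = {"bi": "bodily injury"}

INTENTS = {
    "change_address":     {"change", "changed", "address"},
    "increase_bi_limits": {"increase", "bodily", "injury", "limits"},
}

def detect_intent(utterance):
    words = utterance.lower().replace(".", "").split()
    expanded = " ".join(SYNONYMS.get(w, w) for w in words).split()  # expand jargon
    scores = {name: len(keywords & set(expanded)) for name, keywords in INTENTS.items()}
    return max(scores, key=scores.get)

for phrase in ("My address changed",
               "I need to change my address",
               "I want to increase my BI limits",
               "I need to increase my bodily injury limits"):
    print(phrase, "->", detect_intent(phrase))
```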

Amelia was unveiled in late 2014, after a secretive 16-year-long development process, and is now being tested in the real world at companies such as Shell Oil, Accenture, NTT Group and Baker Hughes, on tasks ranging from overseeing a help desk to advising remote workers in the field.


Chetan Dube, the long-time CEO of IPsoft, Amelia’s creator, told Entrepreneur magazine:

“A large part of your brain is shackled by the boredom and drudgery of everyday existence. […] But imagine if technology could come along and take care of all the mundane chores for you, and allow you to indulge in the forms of creative expression that only the human brain can indulge in. What a beautiful world we would be able to create around us.”

His vision sounds noble, but the reality is that most of the employees whose jobs get automated away by Watson, Amelia and their successors won’t be able to make the move to higher-level, less mundane and less routine tasks. If you think about it, a big percentage of white-collar workers have largely repetitive, service-type jobs. And even those of us in higher-level roles will eventually get automated out of the system; it’s a matter of time, and less time than you think.

I’m not saying that the technology can or should be stopped; that’s simply not realistic. I am saying that, as a society, there are some important conversations we need to start having about what we want things to look like in 10 to 20 years. If we don’t have those discussions, we are going to end up in a world with very high unemployment, where the very few people who hold large capital and those with the STEM skills to design and run the AIs will do very well, while the other 80-90% of us could be unemployable. This is truly scary stuff: McKinsey predicts that by 2025 technology will take over tasks currently performed by hundreds of millions of knowledge workers. This is no longer science fiction.

As humans, our brains evolved to work linearly, and we have a hard time understanding and predicting change that happens exponentially. For example, merely 30 years ago, it was unimaginable that most people would walk around with a device in their pockets that could perform more sophisticated computing than computers at MIT in the 1950s. The huge improvement in power is a result of exponential growth of the kind explained by Moore’s law, which is the prediction that the number of transistors that fit on a chip will double every two years while the chip’s cost stays constant. There is every reason to believe that AI will see similar exponential growth. Just five years ago, the world’s top AI experts at MIT were confident that cars could never drive themselves, and now Google has proven them wrong. Things can advance unimaginably fast when growth becomes exponential.
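A quick back-of-the-envelope calculation shows why the compounding is so counterintuitive: doubling every two years for 30 years multiplies capability roughly 32,000-fold, where a linear mindset would expect something closer to a 16-fold improvement.

```python
# Back-of-the-envelope: exponential doubling vs. linear growth over 30 years.
years = 30
doubling_period = 2  # Moore's law: transistor counts double roughly every two years

exponential_gain = 2 ** (years / doubling_period)  # 2^15 = 32,768x
linear_gain = 1 + years / doubling_period          # ~16x if each step merely added the same amount

print(f"{exponential_gain:,.0f}x exponential vs {linear_gain:.0f}x linear")
```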


Some of the most brilliant minds of our times are sounding the alarm bells. Elon Musk said, “I think we should be very careful about AI. If I had to guess, our biggest existential threat is probably that we are summoning the demon.” Stephen Hawking warned, “The development of full artificial intelligence could spell the end of the human race.”