Tag Archives: sergey brin

Where Silicon Valley Is Wrong on Innovation

Silicon Valley exemplifies the saying, “The more things change, the more they stay the same.” Very little has changed over the past decade, with the Valley still mired in myth and stale stereotype. Ask any older entrepreneurs or women who have tried to get financing; they will tell you of the walls they keep hitting. Speak to VCs, and you will realize they still consider themselves kings and kingmakers.

With China’s innovation centers nipping at the Valley’s heels, and with the innovation centers that Steve Case calls “the rest” on the rise, it is time to dispel some of Silicon Valley’s myths.

Myth 1: Only the young can innovate

The words of one Silicon Valley VC will stay with me always. He said: “People under 35 are the people who make change happen, and those over 45 basically die in terms of new ideas.” VCs are still looking for the next Mark Zuckerberg.

The bias persists despite clear evidence that the stereotype is wrong. My research in 2008 documented that the average and median age of successful technology-company founders in the U.S. is 40, and several subsequent studies have reached the same conclusion. Twice as many of these founders are older than 50 as are younger than 25; twice as many are over 60 as are under 20. Older, experienced entrepreneurs have the greatest chances of success.

Don’t forget that Marc Benioff was 35 when he founded Salesforce.com, and Reid Hoffman was 36 when he founded LinkedIn. Steve Jobs’s most significant innovations at Apple — the iMac, iTunes, iPod, iPhone and iPad — came after he was 45. Qualcomm was founded by Irwin Jacobs when he was 52 and by Andrew Viterbi when he was 50. The greatest entrepreneur today, transforming industries including transportation, energy and space, is Elon Musk; he is 47.

See also: Innovation: ‘Where Do We Start?’  

Myth 2: Entrepreneurs are born, not made

There is a perennial debate about who can be an entrepreneur. Jason Calacanis proudly proclaimed that successful entrepreneurs come from entrepreneurial families and start off running lemonade stands as kids. Fred Wilson blogged about being shocked when a professor told him you could teach people to be entrepreneurs. “I’ve been working with entrepreneurs for almost 25 years now,” he wrote, “and it is ingrained in my mind that someone is either born an entrepreneur or is not.”

Yet my teams at Duke and Harvard documented that the majority, 52%, of Silicon Valley entrepreneurs were the first in their immediate families to start a business. Only a quarter of the sample we surveyed had caught the entrepreneurial bug in college; half hadn’t even thought about entrepreneurship by then.

Mark Zuckerberg, Steve Jobs, Bill Gates, Jeff Bezos, Larry Page, Sergey Brin and Jan Koum didn’t come from entrepreneurial families. Their parents were dentists, academics, lawyers, factory workers or priests.

Anyone can be an entrepreneur, especially in this era of exponentially advancing technologies, in which a knowledge of diverse technologies is the greatest asset.

Myth 3: Higher education provides no advantage

Peter Thiel made headlines in 2011 with his announcement that he would pay teenagers $100,000 each to quit college and start businesses. He made big claims about how these dropouts would solve the problems of the world. Yet his foundation failed in that mission and quietly refocused its efforts toward providing education and networking. As Wired reported, “Most [Thiel fellows] are now older than 20, and some have even graduated college. Instead of supplying bright young minds with the space and tools to think for themselves, as Thiel had originally envisioned, the fellowship ended up providing something potentially more valuable. It has given its recipients the one thing they most lacked at their tender ages: a network.”

This came as no surprise. Education and connections are essential to success. As our research at Duke and Harvard had shown, companies founded by college graduates have twice the sales and twice the employment of companies founded by others. What matters is that the entrepreneur complete a baseline of education; the field of education and ranking of the college don’t play a significant role in entrepreneurial success. Founder education reduces business-failure rates and increases profits, sales and employment.

Myth 4: Women can’t succeed in tech

Women-founded firms receive hardly any venture-capital investments, and women still face blatant discrimination in the technology field. Tech companies have promised to narrow the gap, but there has been insignificant progress.

This is despite the fact that, according to 2017 Census Bureau data, women earn more than two-thirds of all master’s degrees, three-quarters of professional degrees and 80% of doctoral degrees. Not only do girls surpass boys on reading and writing in almost every U.S. school district, they often outdo boys in math — particularly in racially diverse districts.

Earlier research by my team revealed there are also no real differences in success factors between men and women company founders: both sexes have exactly the same motivations, are of the same age when founding their startups, have similar levels of experience and equally enjoy the startup culture.

Other research has shown that women actually have the advantage: women-led companies are more capital-efficient, and venture-backed companies run by a woman have 12% higher revenues than others. First Round Capital found that companies in its portfolio with a woman founder performed 63% better than companies with all-male founding teams.

See also: Innovation — or Just Innovative Thinking?  

Myth 5: Venture capital is a prerequisite for innovation

Many would-be entrepreneurs believe they can’t start a company without VC funding. That reflected reality a few years ago, when capital costs for technology were in the millions of dollars. But it is no longer the case.

A $500 laptop today has more computing power than the Cray-2 supercomputer, which cost $17.5 million, did in 1985. For storage, back then, you needed server farms and racks of hard disks, which cost hundreds of thousands of dollars and required air-conditioned data centers. Today, one can use cloud computing and cloud storage, which cost practically nothing.

With the advances in robotics, artificial intelligence and 3D printing, the technologies are becoming cheaper, no longer requiring major capital outlays for their development. And if entrepreneurs develop new technologies that customers need or love, money will come to them, because venture capital always follows innovation.

Venture capital has become less relevant than ever to startup founders.

Is AI the End of Jobs or a Beginning?

Artificial intelligence (AI) is advancing so rapidly that even its developers are being caught off guard. Google co-founder Sergey Brin said in Davos, Switzerland, in January that it “touches every single one of our main projects, ranging from search to photos to ads … everything we do … it definitely surprised me, even though I was sitting right there.”

The long-promised AI, the stuff we’ve seen in science fiction, is coming, and we need to be prepared. Today, AI is powering voice assistants such as Google Home, Amazon Alexa and Apple Siri, allowing them to have increasingly natural conversations with us and manage our lights, order food and schedule meetings. Businesses are infusing AI into their products to analyze the vast amounts of data and improve decision-making. In a decade or two, we will have robotic assistants that remind us of Rosie from “The Jetsons” and R2-D2 of “Star Wars.”

See also: Seriously? Artificial Intelligence?  

This has profound implications for how we live and work, for better and worse. AI is going to become our guide and companion — and take millions of jobs away from people. We can deny this is happening, be angry or simply ignore it. But, if we do, we will be the losers. As I discussed in my new book, “Driver in the Driverless Car,” technology is now advancing on an exponential curve and making science fiction a reality. We can’t stop it. All we can do is to understand it and use it to better ourselves — and humanity.

Rosie and R2-D2 may be on their way, but AI is still very limited in its capability and will be for a long time. The voice assistants are examples of what technologists call narrow AI: systems that are useful, can interact with humans and bear some of the hallmarks of intelligence — but would never be mistaken for a human. They can, however, do a better job than humans on a very specific range of tasks. I couldn’t, for example, recall the winning and losing pitcher in every major-league baseball game from the previous night.

Narrow-AI systems are much better than humans at accessing information stored in complex databases, but their capabilities exclude creative thought. If you asked Siri to find the perfect gift for your mother for Valentine’s Day, Siri might make a snarky comment but couldn’t venture an educated guess. If you asked her to write your term paper on the Napoleonic Wars, she couldn’t help. That is where the human element comes in and where the opportunities are for us to benefit from AI — and stay employed.

In his book “Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins,” chess grandmaster Garry Kasparov tells of his shock and anger at being defeated by IBM’s Deep Blue supercomputer in 1997. He acknowledges that he is a sore loser but was clearly traumatized by having a machine outsmart him. He was aware of the evolution of the technology but never believed it would beat him at his own game. Twenty years later, having come to grips with his defeat, he says fail-safes are required … but so is courage.

Kasparov wrote: “When I sat across from Deep Blue 20 years ago, I sensed something new, something unsettling. Perhaps you will experience a similar feeling the first time you ride in a driverless car, or the first time your new computer boss issues an order at work. We must face these fears in order to get the most out of our technology and to get the most out of ourselves. Intelligent machines will continue that process, taking over the more menial aspects of cognition and elevating our mental lives toward creativity, curiosity, beauty and joy. These are what truly make us human, not any particular activity or skill like swinging a hammer — or even playing chess.”

In other words, we had better get used to AI and ride the wave.

Human superiority over animals is based on our ability to create and use tools. The mental capacity to make things that improved our chances of survival led to a natural selection of better toolmakers and tool users. Nearly everything a human does involves technology. For adding numbers, we used abacuses and mechanical calculators and now have spreadsheets. To improve our memory, we wrote on stones, parchment and paper, and now have disk drives and cloud storage.

AI is the next step in improving our cognitive functions and decision-making.

Think about it: When was the last time you tried memorizing your calendar or Rolodex or used a printed map? Just as we instinctively do everything on our smartphones, we will rely on AI. We may have forfeited skills such as the ability to add up the price of our groceries, but we are smarter and more productive. With the help of Google and Wikipedia, we can be experts on any topic, and these don’t make us any dumber than encyclopedias, phone books and librarians did.

A valid concern is that dependence on AI may cause us to forfeit human creativity. As Kasparov observes, the chess games on our smartphones are many times more powerful than the supercomputers that defeated him, yet this didn’t cause human chess players to become less capable — the opposite happened. There are now stronger chess players all over the world, and the game is played in a better way.

See also: Microinsurance? Let’s Try Macroinsurance  

As Kasparov explains: “It used to be that young players might acquire the style of their early coaches. If you worked with a coach who preferred sharp openings and speculative attacking play himself, it would influence his pupils to play similarly. … What happens when the early influential coach is a computer? The machine doesn’t care about style or patterns or hundreds of years of established theory. It counts up the values of the chess pieces, analyzes a few billion moves, and counts them up again. It is entirely free of prejudice and doctrine. … The heavy use of computers for practice and analysis has contributed to the development of a generation of players who are almost as free of dogma as the machines with which they train.”

Perhaps this is the greatest benefit that AI will bring — humanity can be freed of dogma and historical bias and can make more intelligent decisions. And instead of doing repetitive data analysis and number crunching, human workers can focus on enhancing their knowledge and being more creative.

Theranos: A Hard Lesson for Innovators

The Theranos saga hit another low when the company informed regulators that it was voiding two years of tests from its Edison blood-testing devices and sending tens of thousands of revised test results to doctors. This means thousands of patients received incorrect results and were likely given incorrect treatments.

These doctors and patients trusted Theranos, relying on the brand value of the gilded names the company promoted as its governance oversight and presuming that somebody had conducted genuine, diligent reviews. These names included diplomatic and military titans such as two former U.S. secretaries of state (Henry Kissinger and George Shultz), former U.S. senators (Sam Nunn and Bill Frist), a former U.S. secretary of defense (William Perry) and, surprisingly, the tough-minded former CEO of Wells Fargo, Richard Kovacevich.

Didn’t Theranos CEO Elizabeth Holmes and her executive team realize they were risking lives by using unproven and faulty equipment? Didn’t the all-star board ask tough questions about the workings of the technology? Didn’t the leaders understand that ethics is a slippery slope, that once you compromise there is no turning back?

Sadly, we have seen too many ethical lapses and too little disclosure to shareholders in the technology world. We have written about Silicon Valley’s careless and arrogant frat-boy culture; warned Uber’s CEO that he risked being known as a modern-day robber baron for his dubious business practices; and battled tech titans who pay children to drop out of school before they have developed important social skills and ethical values.

See also: The State of Ethics in Insurance

We can list more than 50 tech firms whose governance failed, and killed them, long before their technology did. One example is Informix — a fallen star of Silicon Valley and a darling of Wall Street. Founded in 1980, it towered over its rivals Oracle and Sybase as the first of the database giants to offer object-relational database support with superior multimedia storage built in. Nonetheless, its poor governance drowned out its technological triumphs, as misstated revenue recognition and accounting fraud led to the imprisonment of celebrity CEO Phil White and to the firm’s ultimate collapse.

Silicon Valley often thinks it can live by a different set of rules than corporate America because it is developing world-changing innovations and because start-ups need the freedom to innovate. Yes, we need to allow entrepreneurs to take risks and break some rules so they can do their magic. But these rules cannot be ethical ones. The lines on ethics are usually clear, as they were with Theranos, and there can be no compromise.

Profiteers are always ready to exploit markets fueled by hope, hype and emotion. Here are some lessons:

1. Question the over-hyped founders.

Theranos’s CEO notoriously chased testimonial media appearances and self-aggrandizing promotional materials and strutted before cheering, unquestioning audiences of wannabe disrupters at TED talks. Instead, look for leaders who engage in debate with people who understand the core technology and may fortify or enhance the original concept. At some of the biggest and most successful companies, some of the most vital names — Robert Noyce at Intel, Paul Allen at Microsoft, Steve Wozniak at Apple, David Filo at Yahoo, Sergey Brin at Google — were not the names the media attached to the company, but they were crucial to each firm’s technical, commercial and moral trajectory. The wisdom of Abraham Lincoln’s Team of Rivals has value beyond politics.

2. Beware of leaders who hide behind the cloaks of marquee names.

Celebrity roll-ups are used as governance smoke screens that distract from substance. It seems too obvious to need stating, yet it must be stated: boards should be recruited from the ranks of those with relevant skill and knowledge, not from the gossip pages. The three board members who seemed to understand Theranos’s technology quit en masse three years ago.

3. Dissent is not disloyalty.

Tech leaders should embrace outside critics and listen to internal challenges rather than disparage — and even threaten — dissenters. The chief scientist at Theranos killed himself after reportedly telling his wife that the technology did not work. Frustrated internal whistleblowers revealed to The Wall Street Journal that the firm’s celebrated systems were no longer even used for most of the types of tests the company ran.

The boards of start-ups must also be held to higher standards. When they join a board, venture capitalists have a fiduciary duty to represent the interests of all shareholders, not only their funds. While Theranos is not a publicly listed enterprise, members of the board still staked their good names to reassure investors, strategic partners, employees and the public — in this case not just verifying financial health but also physical health.

During the dizzying days on the eve of the dot-com crash, many innovative firms skyrocketed as they disrupted the defensive old order. Anyone who questioned the hype was trashed as a neo-Luddite defending the past. Prominent governance apologists celebrated “e-board governance,” a self-righteous term that replaced traditional diligent governance. Such new-age board oversight encouraged venture capitalists to serve on scores of boards, publish misleading pro forma financial reports, backdate stock options, illegally book barter deals and follow other reckless practices, all while waving away oversight with marquee names. Two decades later, “the Valley” should ascend from such governance lowlands.

Jeffrey A. Sonnenfeld, a professor at the Yale School of Management, is the co-author of this article.

Healthy Disrespect for the Impossible

When people are extraordinarily successful, examining their characteristics, values and attitudes can be instructive. The rest of us can learn from them and possibly adopt some of their approaches to advance our own goals. Larry Page, co-founder of Google, is an example of one who has achieved exceptional heights. Peering into his thought process can be enlightening.

Page says, “Have a healthy disrespect for the impossible.”

To conceive and develop the Google concept and then the massive company, its young founders had to have a very healthy disrespect for the impossible. Others derided the idea of collecting all the information in the world and then making it available to everyone in the world. It was not only a bold idea; most thought it ridiculous and impossible. But Larry Page and Sergey Brin had a very healthy disrespect for the impossible. They made it happen.

The concept of disrespecting the impossible could be entertained by those of us in the workers’ compensation industry. True, few of us are likely to reach the pinnacle level of Larry and Sergey, but we can borrow some of their bold thinking to get past the assumptions and barriers that keep us from achieving more.

Everyone agrees workers’ compensation as an industry needs a healthy nudge to try new things. The industry is known for its resistance to change. Maybe the way to change the industry, to be an industry disruptor, is to begin with an attitude of disrespecting the impossible.

Many people, including those in the workers’ compensation industry, focus on why something cannot be done. The reasons are many, but cultural tradition probably plays a role. Inventiveness is not expected or appreciated. Too often, the best way to keep a job in a corporation is to keep your head down and avoid being noticed. Spearheading a new idea is risky.

Stonewalling new ideas, new ways of doing things or new technology thwarts creative thought and certainly stalls progress. I was once told that incorporating a very good product would mean doing things differently in the organization. So the answer was automatically no!

We all know the old saying about the word “ass-u-me.” It actually packs some truth. To avoid the trap, check assumptions for veracity. Incorrect assumptions can be highly self-limiting.

Begin the process of problem-solving with new thinking — disrespect the impossible. What could be done if the perceived barriers did not exist? What could be accomplished if new methods were implemented?

Probably the most important ingredient for achievement in any context is tenacity. It’s easy to quit when the barriers seem daunting. Tenacity combined with a disrespect for the impossible might be unbeatable.

Fasten Your Seatbelts: Driverless Cars Change Everything (Part 1)

In fact, the driverless car has broad implications for society, for the economy and for individual businesses. In the U.S. alone, the car puts up for grabs some $2 trillion a year in revenue and even more in market cap. It creates business opportunities that dwarf Google’s current search-based business and unleashes existential challenges to market leaders across numerous industries, including car makers, auto insurers, energy companies and others that share in car-related revenue.

Because people consistently underestimate the implications of a change in technology — are you listening, Kodak, Blockbuster, Borders, Sears, etc.? — and because many industries face the kind of disruption that may beset the auto industry, I’m going to do a series of blogs on the ripple effects that the driverless car may create. I’m hoping both to dramatize the effects of a disruptive technology and to illustrate how to think about the dangers and the opportunities that one creates.

In this installment, I’ll start the series with a broad-brush look at the far-reaching changes that could occur from the driver’s standpoint. In the next installment, I’ll show just how far the ripples will reach for companies—not just car makers, but insurers, hospitals, parking lot operators and even governments and utilities. (Fines drop when every car obeys the law, and roads don’t need to be lit if cars can see in the dark).

After that, I’ll explore how real the prospects are for driverless cars. (Hint: The issue is when, not if—and when is sooner than you think.) In the last installment, I’ll go into the strategic implications for every company thinking about innovation in these fast-moving times.

To begin:

Driverless car technology has the very real potential to save millions from death and injury and eliminate hundreds of billions of dollars of costs. Google’s claims for the car, as described by Sebastian Thrun, its lead developer, are:

  1. We can reduce traffic accidents by 90%.
  2. We can reduce wasted commute time and energy by 90%.
  3. We can reduce the number of cars by 90%.

To put those claims in context:

About 5.5 million motor vehicle accidents occurred in 2009 in the U.S., involving 9.5 million vehicles. These accidents killed 33,808 people and injured more than 2.2 million others, 240,000 of whom had to be hospitalized.

Adding up all costs related to accidents — including medical costs, property damage, loss of productivity, legal costs, travel delays and pain and lost quality of life — the American Automobile Association studied crash data in the 99 largest U.S. urban areas and estimated the total costs to be $299.5 billion. Adjusting those numbers to cover the entire country suggests annual costs of about $450 billion.

Now take 90% off these numbers. Google is claiming its car could save almost 30,000 lives each year on U.S. highways and prevent nearly 2 million additional injuries. Google claims it can reduce accident-related expenses by at least $400 billion a year in the U.S. Even if Google is way off — and I don’t believe it is — the improvement in safety will be startling.
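That back-of-envelope arithmetic is easy to check. The sketch below uses the figures cited above; note that the 90% reduction factor is Google's claim, not a measured result, and the $450 billion national figure is the article's rough extrapolation from the AAA urban estimate.

```python
# Back-of-envelope check of the claimed driverless-car savings,
# using the 2009 U.S. accident statistics and AAA cost estimate cited above.

deaths = 33_808            # U.S. motor-vehicle deaths, 2009
injuries = 2_200_000       # injuries, 2009
national_costs_bn = 450.0  # rough national extrapolation of AAA's $299.5bn
                           # estimate for the 99 largest urban areas

reduction = 0.90           # Google's claimed accident reduction (a claim,
                           # not a measurement)

lives_saved = deaths * reduction
injuries_prevented = injuries * reduction
costs_avoided_bn = national_costs_bn * reduction

print(f"Lives saved per year:  {lives_saved:,.0f}")          # 30,427
print(f"Injuries prevented:    {injuries_prevented:,.0f}")   # 1,980,000
print(f"Costs avoided:         ${costs_avoided_bn:,.0f} billion")  # $405 billion
```

The outputs line up with the rounded figures in the text: almost 30,000 lives, nearly 2 million injuries and at least $400 billion a year.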

In addition, the driverless car would reduce wasted commute time and energy by relieving congestion and allowing cars to go faster, operate closer together and choose more effective routes. One study estimated that traffic congestion wasted 4.8 billion hours and 1.9 billion gallons of fuel a year for urban Americans. That translates to $101 billion in lost productivity and added fuel costs.

The driverless car could reduce the need for cars by enabling efficient sharing of vehicles. A driverless vehicle could theoretically be shared by multiple people, delivering itself when and where it is needed, parking itself in some remote place whenever it’s not in use.

A car is often a person’s second-largest capital expenditure, after a home, yet it sits unused some 95% of the time. With the Google car, people could avoid spending many thousands of dollars, or tens of thousands, on an item that mostly sits idle and, instead, simply pay by the mile.

A study led by Lawrence Burns and William Jordan at Columbia University’s Earth Institute Program on Sustainable Mobility showed the dramatic cost-savings potential. Their analysis found that a shared, driverless fleet could provide far better mobility experiences than personally owned vehicles at radically lower cost. For a midsize city like Ann Arbor, MI, the cost per trip-mile could be reduced by 80% compared with personally owned vehicles driven about 10,000 miles per year — without even factoring in parking and the opportunity cost of driving time. Their analysis showed similar cost-savings potential for suburban and high-density urban scenarios as well.

Driving could become Zipcar writ large (except the car comes to you).

Looking worldwide, the statistics are less precise, but the potential benefits are even more startling. The World Health Organization estimates that more than 1.2 million people are killed on the world’s roads each year, and as many as 50 million others are injured. And the WHO predicts that the problems will only get worse. It estimates that road traffic injuries will become the fifth leading cause of worldwide death by 2030, accounting for 3.6% of the total — rising from the ninth leading cause in 2004, when it accounted for 2.2% of the world total.

If Google could give everyone a world-class electronic driver, it would drastically reduce the deaths, injuries and direct costs of accidents. The driverless car might also save developing countries from ever having to replicate the car-centric infrastructure that has emerged in most Western countries. This leapfrogging has already happened with telephone systems: Developing countries that lacked land-line telephone and broadband connectivity, such as India, made the leap directly to mobile systems rather than build out their land-line infrastructures.

China alone expects to invest almost $800 billion in road and highway construction between 2011 and 2015. It is doubtful, however, that even this massive investment can keep up with the rising number of accidents and worsening traffic congestion the country endures. And road construction won’t address pollution, to which the massive car buildup contributes and which is becoming an ever more politically sensitive issue.

How might China and other developing economic powers redeploy their massive car-related investments if fundamental assumptions were viewed through the lens of the driverless car?

In sum, the Google driverless car not only makes for a great demo; it has worldwide social and economic benefits that could amount to trillions of dollars per year.

Insurers will feel major effects because hundreds of billions of dollars of reductions in losses obviously mean reduced requirements for insurance in all sorts of areas: auto, life, P&C, health and more; even workers’ comp needs will diminish because so many claims that would have stemmed from car accidents simply won’t happen. The locus of power in some parts of the insurance industry will shift, too. Why should a driver buy insurance if the car is doing the driving? Instead, car makers will likely take on the responsibility, and perhaps as part of their traditional approach to product liability, rather than working through auto insurance companies as they are currently constituted. I’ll look at those issues and others next time.