How to Lead Like a Humble Gardener

In business, as in politics and war, we too often yearn for heroic leaders. A smart book argues, instead, that leaders need to act more like humble gardeners.

Stanley McChrystal and his coauthors write in Team of Teams: New Rules of Engagement For A Complex World that “we unrealistically demand the apogee of heroic leadership—omniscient, fearless, virile and reassuring.”

We expect leaders to be like Napoleon, crafting brilliant strategies, deftly maneuvering troops and distributing precise commands—all while looking regal on horseback. We demand high-level strategic vision and an unerring ability to anticipate broad market trends. We celebrate leaders for encyclopedic mastery of every aspect of their business and ridicule them when they do not have it. We expect all this even though we know that it is entirely unrealistic.

See also: Better Way to Think About Leadership  

What’s more, the authors observe, too many leaders compound the problem by trying to live up to this expectation. They strive to stay informed, to always have the right answers and deliver them with force. They construct rigid, hierarchical organizations, which they then try to control like a thousand marionettes on many stages. They fear that failure to do so reflects weakness and irrelevance.

McChrystal, a retired four-star general, tells of his own temptation to view war, the ultimate real-life competition, as if it were like chess—“the ultimate strategic contest.”

Empowered with an extraordinary ability to view the board, and possessing a set of units with unique capabilities, I was tempted to maneuver my forces like chess pieces. I could be Bobby Fischer or Garry Kasparov, driving my relentlessly aggressive campaign toward checkmate… I felt intense pressure to fulfill the role of chess master for which I had spent a lifetime training.

The problem is that the chess metaphor quickly breaks down. Chess is an orderly game, with clear rules and alternating moves between players. In real life, the competition is free to move multiple pieces and pummel you on multiple fronts, without waiting respectfully for your next move. Events unfold faster, and with more complexity, than any one person can master, and faster than hierarchical decision processes can monitor, assess, decide and act.

The speed and connected nature of the competitive battlefields render both heroic leaders and hierarchical organizations too slow to survive. Instead of heroic leaders, McChrystal argues, we need leaders who act more like humble gardeners.

Master gardeners know they do not actually “grow” tomatoes, squash or beans—they can only foster environments in which the plants do so.

Similarly, leaders need to understand that competitive success cannot depend on move-by-move control. It requires consistent nurturing of the structure, process and culture of one’s organization to enable subordinate components to function with “smart autonomy.”

Smart autonomy is the ability, responsibility and authority of every part of the team to take action as best it sees fit in pursuit of the overall strategy. That doesn’t mean total autonomy, however. Every part of the team must be tightly linked to common strategies and mission. They must be enabled with “shared consciousness” and have ready access to information from across the organization.

See also: Best Insurance? A Leadership Pipeline

Becoming a gardener, rather than a chess master, changes the role of the leader but does not diminish the need for one.

McChrystal argues that leadership is more critical than ever. Here are key elements of “leading like a gardener” that he and his coauthors lay out:

  1. Shift focus from moving pieces to shaping the ecosystem
  2. Create and maintain the teamwork conditions
  3. Keep the team of teams focused on clearly articulated priorities
  4. Demand free-flowing conversation
  5. Reinforce empowered execution
  6. Lead by demonstration
  7. Keep eyes on, hands off

Whether you lead grand armies, a multi-national conglomerate or a small team, Team of Teams is well worth putting on your summer reading list.

AI’s Promise Is Finally Upon Us

We have been hearing predictions for decades of a takeover of the world by artificial intelligence. In 1957, Herbert A. Simon predicted that within 10 years a digital computer would be the world’s chess champion. That didn’t happen until 1997. And despite Marvin Minsky’s 1970 prediction that “in from three to eight years we will have a machine with the general intelligence of an average human being,” we still consider that a feat of science fiction.

The pioneers of artificial intelligence were surely off on the timing, but they weren’t wrong; AI is coming. It is going to be in our TV sets and driving our cars; it will be our friend and personal assistant; it will take the role of our doctor. There have been more advances in AI over the past three years than there were in the previous three decades.

Even technology leaders such as Apple have been caught off guard by the rapid evolution of machine learning, the technology that powers AI. At its recent Worldwide Developers Conference, Apple opened up its AI systems so that independent developers could help it create technologies that rival what Google and Amazon have already built. Apple is way behind.

The AI of the past used brute-force computing to analyze data and present them in a way that seemed human. The programmer supplied the intelligence in the form of decision trees and algorithms. Imagine that you were trying to build a machine that could play tic-tac-toe. You would give the computer specific rules on what move to make, and it would follow them. That is essentially how IBM’s Deep Blue computer beat chess Grandmaster Garry Kasparov in 1997: by using a supercomputer to evaluate vastly more possible moves, faster, than he could.
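To make the contrast concrete, here is a minimal, hypothetical Python sketch of that rule-based approach for tic-tac-toe. The rules and function names are illustrative inventions for this example, not anyone's actual system; the point is that the programmer, not the machine, supplies every piece of the "intelligence" as an ordered list of hand-written rules.

```python
# Rule-based (non-learning) game AI: the programmer encodes the
# strategy explicitly. The board is a list of 9 cells: "X", "O" or " ".

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winning_move(board, player):
    """Return a cell index that immediately wins for player, else None."""
    for a, b, c in LINES:
        cells = [board[a], board[b], board[c]]
        if cells.count(player) == 2 and cells.count(" ") == 1:
            return (a, b, c)[cells.index(" ")]
    return None

def choose_move(board, me="X", opponent="O"):
    """Hand-coded priority rules: win, block, center, corner, anything."""
    move = winning_move(board, me)             # Rule 1: win if possible
    if move is None:
        move = winning_move(board, opponent)   # Rule 2: block opponent's win
    if move is None and board[4] == " ":
        move = 4                               # Rule 3: take the center
    if move is None:
        for corner in (0, 2, 6, 8):            # Rule 4: take a corner
            if board[corner] == " ":
                move = corner
                break
    if move is None:
        move = board.index(" ")                # Rule 5: any remaining cell
    return move

board = ["X", "X", " ",
         "O", "O", " ",
         " ", " ", " "]
print(choose_move(board))  # X can complete the top row at cell 2, so rule 1 fires
```

Nothing here is learned: change the game even slightly and a human must rewrite the rules, which is exactly the limitation the next paragraphs describe.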

See also: AI: Everywhere and Nowhere (Part 2)

Today’s AI uses machine learning, in which you give it examples of previous games and let it learn from those examples. The computer is taught what to learn and how to learn and makes its own decisions. What’s more, the new AIs are modeling the human mind itself, using techniques similar to our learning processes. Before, it could take millions of lines of computer code to perform tasks such as handwriting recognition. Now it can be done in hundreds of lines. What is required is a large number of examples so that the computer can teach itself.

The new programming techniques use neural networks, which are modeled on the human brain: information is processed in layers, and the connections between those layers are strengthened based on what is learned. This is called deep learning because of the increasing number of layers of information processed by increasingly faster computers. Deep learning is enabling computers to recognize images, voice and text — and to do human-like things.
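As a toy illustration of that learning principle, here is a hypothetical sketch of a single artificial neuron (not a deep, many-layered network, but the same underlying idea) learning the logical OR function purely from labeled examples. Nothing about OR is programmed in; the connection weights are strengthened or weakened from the examples.

```python
# A single neuron learning from examples: connection weights are
# adjusted by gradient descent instead of being programmed by hand.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Labeled training examples for logical OR: ((input1, input2), target).
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]  # connection weights
b = 0.0                                        # bias
rate = 1.0                                     # learning rate

for _ in range(2000):                          # repeated passes over the data
    for (x1, x2), target in examples:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # How wrong was the prediction, scaled by the sigmoid's slope?
        delta = (target - out) * out * (1 - out)
        # Strengthen/weaken each connection in proportion to its input.
        w[0] += rate * delta * x1
        w[1] += rate * delta * x2
        b += rate * delta

predictions = [round(sigmoid(w[0] * x1 + w[1] * x2 + b))
               for (x1, x2), _ in examples]
print(predictions)  # the neuron has learned OR: [0, 1, 1, 1]
```

Deep learning stacks many layers of such units and adjusts millions of weights the same way, which is why it needs the "large number of examples" mentioned above.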

Google searches used to rely on a technique called PageRank to come up with their results. Using rigid, proprietary algorithms, Google analyzed the text and links on web pages to determine what was most relevant and important. Google is now replacing this technique in searches and in most of its other products with algorithms based on deep learning, the same technology it used to defeat a human champion at the game Go. During those extremely complex games, even the program’s developers were at times unable to explain why it had made the moves it did.
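For contrast with the opaque deep-learning approach, the core PageRank idea is simple enough to sketch in a few lines: a page's importance derives from the importance of the pages that link to it. The tiny link graph and the damping factor below are invented for illustration; this is the textbook power-iteration form of the algorithm, not Google's production system.

```python
# An illustrative power-iteration sketch of PageRank.
# links maps each page to the list of pages it links to.

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal rank
    for _ in range(iterations):
        # Every page keeps a small baseline rank (the "random surfer").
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # A page splits its rank evenly among the pages it links to.
                share = rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += damping * share
            else:
                # A dangling page spreads its rank evenly over all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

# Hypothetical three-page web: A links to B and C, B links to C, C links to A.
toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(toy_web)
print(max(ranks, key=ranks.get))  # C ranks highest: both A and B link to it
```

The rigidity the article describes is visible here: every step of the ranking logic is an explicit, human-written rule, whereas a deep-learning ranker infers relevance from examples.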

In the fields in which it is trained, AI is now exceeding the capabilities of humans.

AI has applications in every area in which data are processed and decisions required. Wired founding editor Kevin Kelly likened AI to electricity: a cheap, reliable, industrial-grade digital smartness running behind everything. He said that it “will enliven inert objects, much as electricity did more than a century ago. Everything that we formerly electrified we will now ‘cognitize.’ This new utilitarian AI will also augment us individually as people (deepening our memory, speeding our recognition) and collectively as a species. There is almost nothing we can think of that cannot be made new, different or interesting by infusing it with some extra IQ. In fact, the business plans of the next 10,000 start-ups are easy to forecast: Take X and add AI. This is a big deal, and now it’s here.”

See also: AI: The Next Stage in Healthcare  

AI will soon be everywhere. Businesses are infusing AI into their products and using it to analyze the vast amounts of data they are gathering. Google, Amazon and Apple are working on voice assistants for our homes that manage our lights, order our food and schedule our meetings. Robotic assistants such as Rosie from “The Jetsons” and R2-D2 from “Star Wars” are about a decade away.

Do we need to be worried about the runaway “artificial general intelligence” that goes out of control and takes over the world? Yes — but perhaps not for another 15 or 20 years. There are justified fears that rather than being told what to learn and complementing our capabilities, AIs will start learning everything there is to learn and know far more than we do. Though some people, such as futurist Ray Kurzweil, see us using AI to augment our capabilities and evolve together, others, such as Elon Musk and Stephen Hawking, fear that AI will usurp us. We really don’t know where all this will go.

What is certain is that AI is here and making amazing things possible.

AI: Everywhere and Nowhere (Part 1)

This is part 1 of a three-part series.

Artificial intelligence (AI) is all the rage in the popular press. Even an alien who had just landed on Earth from a faraway planet could not have missed the headlines: AlphaGo, the AI program developed by Google’s DeepMind, beat Lee Sedol, the world champion of the game Go, 4-1.

Why is there such excitement about an AI program beating the human champion? What, in fact, is artificial intelligence? What does all this mean for our businesses, or for each one of us? To be even more melodramatic: what does it mean for humanity?

AI Defined (Really?)

Since the term was first coined in 1956, AI has suffered from shifting definitions.

The term “artificial intelligence” was introduced at the 1956 Dartmouth conference, organized by John McCarthy, one of the founding fathers of the field. Most definitions of AI revolve around the “simulation of intelligent behavior by computers.” However, one of the most popular AI textbooks takes the idea a step further.

In “Artificial Intelligence: A Modern Approach,” Stuart Russell and Peter Norvig define AI as the designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment. This view of AI brings together a number of distinct subfields: computer vision, speech processing, natural language understanding, reasoning, knowledge representation, learning and robotics, all aimed at having the machine achieve an outcome.

As AI has evolved, it has also splintered. As soon as any subfield of AI is well understood, it gets renamed, and whatever is still to be discovered gets branded as AI. For example, handwriting recognition and voice recognition were once considered AI. However, with the availability of commercial systems that can recognize written text or human speech, these areas are no longer considered AI. As a result, any precise definition of AI is fraught with the danger of becoming obsolete as technology advances.

See also: Seriously? Artificial Intelligence?

Given the difficulty of defining “intelligence,” and hence “artificial intelligence,” the field of AI has resorted to a practical benchmark: beating the best humans at games that demand a great deal of thinking, learning or physical skill. As a result, over the past couple of decades, we have seen AI beat the best humans at chess, at Jeopardy and now at Go.

There are also games such as soccer, in which teams of robots are training to one day beat the human world champions. While beating the best humans at their own games of thinking and learning is a laudable goal, gaming situations differ from the majority of our day-to-day activities in significant ways.

First, these games have a prescribed set of rules and well-defined and certain outcomes (e.g., win, loss or tie). Second, these games are closed-loop systems where the effect of the actions is limited to participants within the system. Third, the AI can be trained with multiple failures (e.g., losing the game) with no real consequences to participants outside the system.

Needless to say, these conditions are not very common outside of games, and the hoopla surrounding AI and games perpetuates confusion about AI’s ultimate mission. While it is great to see that what was considered close to impossible just two years ago, beating the world champion of Go, has now been achieved, the implications of this achievement for the broader application of AI need to be kept in perspective. It is one more feather in the cap of “deep learning,” the mechanism AlphaGo used to beat Lee Sedol. However, the excitement of the win needs to be tempered by the daunting and challenging situations under which AI software must still operate.