The 10 Biggest Mistakes in AI Strategies

Caution is in order whenever a new technology is supposed to take the world by storm. A look at past failures of AI initiatives is instructive.


Way back in 2014, Wired magazine co-founder Kevin Kelly wrote, "The business plans of the next 10,000 startups are easy to forecast: Take X and add AI." Boy, was he right.

That prediction was far bolder than it looks in retrospect. For the nearly 60 years before that, an AI revolution had been promised repeatedly but always remained just over the horizon. Even proponents acknowledged that the field had fallen into an "AI winter."

But Kelly saw a convergence of new forms of computing power, plus big data and better algorithms, and declared the winter over.

And here we are: A form of AI, best known through its incarnation in ChatGPT, has captured the world's imagination, and not only every startup but just about every established company is figuring out how to fit generative AI into its business plans.

But if there's one thing I've learned over my many years of following technology -- beyond the fact that Kevin Kelly is a smart fellow -- it's that caution is in order whenever a new technology is supposed to take the world by storm. Events rarely play out as expected, and mistakes get made in the rush for the gold.

So, I thought I'd share thoughts based on an insightful column I recently read on the 10 biggest mistakes companies make when trying to implement AI. The column doesn't focus on ChatGPT and its rivals -- the topic du jour, I know -- but the broad lessons could save a lot of us a bunch of time, effort and money.

The column, by Bernard Marr, which I recommend reading in its entirety, lists these 10 as the biggest stumbles with AI that he's seen in his extensive experience:

  • Lack of clear objectives
  • Failure to adopt a change management strategy
  • Overestimating AI capabilities
  • Not testing and validating AI systems
  • Ignoring ethics and privacy concerns
  • Inadequate talent acquisition and development
  • Neglecting data strategy
  • Inadequate budget and resource allocation
  • Treating AI as a one-time project
  • Not considering scalability

I'd highlight these four: 1) lack of clear objectives; 2) failure to adopt a change management strategy; 3) overestimating AI capabilities; and 4) treating AI as a one-time project. 

Lack of clear objectives

From what I've observed, the biggest issue is that every company -- certainly, every public company -- is being peppered with questions about what its AI strategy is. Not having an AI plan would be like not having a website in 2000 during the first internet boom or not having an app in the 2010s, after Apple made smartphones ubiquitous. So, every company has some sort of AI strategy -- or, at least, a major AI project.

But AI is often a technology in search of a problem, and that rarely works, no matter what technology is involved. Companies need to start, as usual, by defining a business problem to be solved. Then, if appropriate, AI can be applied. Just deciding to sprinkle some AI on a business unit or process rarely accomplishes anything, and can be distracting.

For me, two of Marr's other "top 10 problems" -- neglecting data strategy and not considering scalability -- fit under this umbrella. A clear AI plan for, say, auto insurance claims needs to start by looking at how AI can streamline the process. But the plan also needs to envision from the get-go how the data gathered fits into the overall corporate data strategy -- such as by being fed into the underwriting process or, perhaps, being shared with car makers so they can improve safety or lower repair costs. In addition, the AI plan needs to map out how the initial work can be scaled. Otherwise, the AI work is more show than substance.

Failure to adopt a change management strategy

Everybody likes change -- except for the change part. And AI, done right, produces major changes in how people work. So, any AI strategy of any scope needs to prepare for the retraining that will be required and for the resistance that will surface. That means those driving the change need to communicate, communicate and communicate, then communicate some more.

Executives will also need to model the new behavior. Don't expect others to use ChatGPT, for instance, if you don't.

I remember when IBM was selling so much email software in the early 1990s that the CEO decreed that paper memos were out and emails were in. The idea made a lot of sense. In Silicon Valley, the approach is known as eating your own dog food: You get a sense of what your customers are experiencing. But IBM executives -- who mostly didn't know how to type -- had their secretaries type memos as usual, then simply put them in email form. Subordinates weren't fooled, and the mandated move to email fizzled.

Overestimating AI capabilities

How easy is it to fall victim to this problem? So easy that even Kevin Kelly got caught, to an extent, in that brilliant article from 2014. He opened the piece basking in the glow of AI's triumph when IBM's Watson beat Ken Jennings at Jeopardy! in 2011, and he took at face value IBM's plans to "send Watson to medical school." But Watson, in its initial incarnation, turned out to be a one-trick pony. It was great at the sort of natural language processing a contestant needs to decipher the clues on Jeopardy! but never came close to deciphering medicine. Kelly also predicted that Google would become so good at AI that, "by 2024, Google's main product will not be search but AI."

AI can be marvelous stuff, but it's really just smart computing. Yes, it can beat Jennings at Jeopardy!, defeat Garry Kasparov at chess and perform all sorts of other marvels in structured environments. But it isn't a better soccer coach than I am -- and I don't even coach soccer.

It's crucial to focus not just on what AI can do but on what it can't. AI isn't magic.

Treating AI as a one-time project

AI is a funny beast. It isn't really a technology, at least not in the sense that, say, telematics or blockchain is. Historically, AI has always been whatever you could imagine as possible but couldn't quite do yet. When computer scientists conquered whatever the problem was, their work became plain old computing, and AI was redefined as some new aspiration.

When people first started bragging to me about the potential of AI, some 35 years ago, the sorts of things we now take for granted weren't even in the realm of possibility. Siri? Are you kidding me? Google Translate? Yeah, right.

Now, while plenty of work is being done to keep improving Siri, Google Translate and other such tools, AI has moved on to new problems: figuring out how to estimate car damage from photos a driver sends, how to price risk for a life insurance policy without requiring a doctor's appointment and fluid samples, and so on.

Basically, AI is a treadmill. Once you get on -- as everyone should -- you can't get off. It never stops moving.

Marr's other four points are certainly important -- not testing and validating AI systems; ignoring ethics and privacy concerns; inadequate talent acquisition and development; and inadequate budget and resource allocation -- but I think of those as downstream issues that can be addressed if the strategic umbrella is right.

I came across a great quote the other day in a book about how much Abraham Lincoln did as president to lay the foundation in the U.S. for the development of science. Lincoln wrote: "We always hear of the successes of life & experiment, but scarcely ever of the failures. Were the failures published to the world as well as the successes much brain work & pain work--as well as money & time would be saved."

As usual, I'm with Honest Abe. I recommend we learn as much as we can from the failures to date on AI projects, to clear the way for the many successes that are possible.

Cheers,

Paul