The Smartest Things I've Read Lately About AI

As we move up the learning curve on implementing generative AI, some are challenging conventional wisdom, such as the idea that AI agents should be treated as employees.

[Image: Fog]

My older daughter just lost a writing job to an AI (one she had to train to replace her), so I don't currently have the kindest thoughts about where AI is headed. But the technology is going to keep barreling forward whether we like it or not, and we all have to adapt.

So let's take a look at the smartest pieces I've seen recently about where generative AI is headed. We'll look at the "fog of AI," which is making investment decisions so very hard. We'll look at the insurance industry's quandary about how to handle all the data centers being built (maybe). We'll look at lessons learned from early attempts at scaling AI, to see what separates the winners from the losers.

But let's start with a piece that contradicts the conventional wisdom that AI agents should be treated as employees.

An article in Harvard Business Review says: 

"Leaders assume that anthropomorphizing AI will make the technology feel less foreign to workers or that it will signal the company’s AI ambitions to investors, customers, or internal stakeholders. But it turns out that treating AI as an employee is not so straightforward. 

"In a randomized experiment, we found that humanizing AI can shift accountability away from individuals, increase escalation, reduce review quality, and erode professional identity and trust. What’s more, it doesn’t meaningfully increase people’s intent to adopt the technology and integrate it into workflows—which remain the key obstacle to capturing AI’s enormous value creation promise."

The most striking findings, to me, were that an AI treated as an employee, rather than as a tool, was more likely to lead humans to slough off responsibility for any problems that occurred and to ask their managers for additional review more often. The article doesn't argue for slowing down implementation of AI, by any means, but it does make a case for changing how many of us describe AI's role.

Another HBR article, titled "The Future Is Shrouded in an AI Fog," offers some comfort for those of us confused about how to proceed with implementing AI. The piece says it's no wonder so many of us feel paralyzed by indecision, given the "extreme uncertainty" about the future of AI:

"Given all the things that might change because of AI, it feels like a fog has descended that occludes our ability to see the future. And right now, that’s its most important—and perhaps most underappreciated—economic effect.... This extreme uncertainty challenges the criteria we use to commit to forward-looking investments."

The opacity doesn't just affect businesses; it also hits us as individuals. The article asks, for instance, why smart kids would want to spend a decade training to be doctors when it's not clear what being a doctor will mean in the age of AI.

Again, self-pity isn't allowed, at least not for very long. The article lays out an approach designed to help us sense change sooner and react with more agility, then tells us to get on it.

Mick Moloney of Oliver Wyman articulates a question I've heard lots of insurance executives pondering lately: How should insurers handle the hundreds of billions of dollars of data centers being built to accommodate the AI rush?

As Mick puts it:

"The six largest AI data center projects currently under construction or formally committed in the United States represent a combined investment of over $120 billion and a combined power capacity target of more than 10 gigawatts — deployed not over decades, as comparable infrastructure has always been, but over three to five years. They are being built by technology companies, AI laboratories, and private equity platforms that have never operated infrastructure at this scale. And they are being financed with instruments that did not exist eighteen months ago."

He doesn't have a silver bullet, but he does offer keen insights into how insurers should think about these six projects based on their power strategies, their financing structures, and the risk management capabilities (or, more likely, the lack thereof) of the builders.

The insurance industry will be wrestling with the data center issue for years, but Mick's piece is a good start.

Finally, McKinsey published "The AI Transformation Manifesto," with a dozen observations about what separates the winners from the losers in the age of AI. For instance:

  • Technology alone doesn’t create advantage; enduring capabilities do. Who are the early winners at AI? The same companies that have been winning before by building capabilities that allow them to harness any technology effectively.... When these new capabilities are built—and they take time to build—the company accelerates its business transformation with technology and outperforms its peers. The capabilities become the competitive advantage....
  • Economic leverage points are your best focal points. Any business model has a few key economic leverage points that provide the biggest impact when improved with AI. In mining, for example, process yield and throughput is a key economic leverage point, and that’s where Freeport-McMoRan achieved game-changing impact. In automotive, supply chain integration is a key leverage point, and that’s where Toyota had its AI breakthrough. Most companies have long lists of use cases. Successful ones focus on achieving deep business transformation in the few areas that matter strategically. That’s where they double down to build AI systems....
  • Building the tech and AI muscle of your senior business leaders should be a top priority. We don’t have a single success story where senior business leaders were not in the driver’s seat. IT leaders can support the transformation, of course, but it’s business leaders who need to drive it.

Again, I don't see a silver bullet, but we're learning....

Cheers,

Paul