Google's $100B Mistake--and How to Avoid It

An embarrassing error by Google's alternative to ChatGPT knocked $100 billion off its market value--because it got ahead of itself in ways the rest of us can learn from. 

Artificial intelligence is an awesome tool--if you recognize its limitations and work around them. Google didn't. And it paid dearly. 

As you may have read, Google executives gathered on Feb. 7 to tout Bard--what's known as a "generative AI," a la the more famous ChatGPT--as the future of the company. The problem: Google had launched an ad that morning bragging about Bard's ability to answer questions in ways that "can spark a child's imagination about the infinite wonders of the universe." To demonstrate, the ad showed Bard being prompted with the question, "What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?"--and then fabricating part of its answer.

Bard's responses included the claim that the telescope took the very first pictures of "exoplanets," or planets outside Earth's solar system--when, in fact, exoplanets were first photographed almost two decades ago. Oops.

The obvious error in such a high-profile effort knocked $100 billion off the market value of Alphabet (Google's parent) the next day, and the stock has continued sliding, losing roughly $100 billion more in market value since then.

Wags on social media noted that Google could have avoided the error by, well, just Googling Bard's claims. And that's actually the approach that I recommend for the foreseeable future: Go ahead and start using ChatGPT, Bard and the other generative AIs in all sorts of ways--and many spring to mind--but be aware that they provide only a rough draft, one that shouldn't see the light of day until it's vetted by a real, live human being you can trust not to just make stuff up.

An article in Fortune says ChatGPT is already being widely deployed, despite being so new:

"Business leaders already using ChatGPT told ResumeBuilder.com that their companies already use ChatGPT for a variety of reasons, including 66% for writing code, 58% for copywriting and content creation, 57% for customer support, and 52% for meeting summaries and other documents. In the hiring process, 77% of companies using ChatGPT say they use it to help write job descriptions, 66% to draft interview requisitions, and 65% to respond to applications."

A large law firm is using ChatGPT "across its network of 43 offices to automate and enhance tasks, including contract analysis, due diligence and regulatory compliance." And I can imagine plenty of similar uses in insurance: pulling together files for underwriters, preparing claims reports, helping monitor compliance and so on. 

But that Fortune article also quotes the CEO of OpenAI, the developer of ChatGPT, as saying it shouldn't be relied on for "anything important," and I certainly would err on the side of caution for now--as Google surely wishes it had.

The issue with the large language models behind generative AIs like ChatGPT and Bard is that they don't know much about the real world. They've simply been fed unimaginable amounts of text and learned to imitate it. You give one a prompt, and it figures out what word is most likely to come next, then the word after that, and so on. The results are scarily impressive but have a tenuous relationship with reality--which is why Bard claimed that the James Webb telescope discovered exoplanets, why ChatGPT has claimed that the most-cited medical journal article of all time is a piece that doesn't actually exist, and why ChatGPT told a friend that he was married to women he'd never met, had children he'd never had and wrote books that didn't exist.

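To make the idea concrete, here is a toy sketch in Python of that next-word loop. The word list and probabilities are entirely made up for illustration--this is not how Google or OpenAI actually build their models, just the basic pattern of "predict a likely next word, append it, repeat." Notice that nothing in the loop ever checks whether the sentence it produces is true.

```python
# Toy illustration of next-word prediction (made-up probabilities, not a real model).
import random

next_word_probs = {
    "the": {"telescope": 0.6, "planet": 0.4},
    "telescope": {"discovered": 0.7, "photographed": 0.3},
    "discovered": {"exoplanets": 0.8, "water": 0.2},
    "photographed": {"exoplanets": 0.9, "stars": 0.1},
    "exoplanets": {"<end>": 1.0},
    "water": {"<end>": 1.0},
    "stars": {"<end>": 1.0},
    "planet": {"<end>": 1.0},
}

def generate(start_word, max_words=10):
    """Keep appending a plausible next word until an end marker is reached."""
    words = [start_word]
    for _ in range(max_words):
        choices = next_word_probs.get(words[-1])
        if not choices:
            break
        # Pick the next word in proportion to its (made-up) probability.
        # Fluent-sounding output, but no step here checks the facts.
        next_word = random.choices(list(choices), weights=list(choices.values()))[0]
        if next_word == "<end>":
            break
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g., "the telescope discovered exoplanets"
```
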
It's certainly possible to connect generative AIs to, say, the universe of Google and give them access to a wide array of facts, but that creates its own problems, as Microsoft learned in 2016, when an earlier AI became a racist pig within days of scooping up all kinds of garbage online. Those risks should get innovated away in time--my old friend Andy Kessler makes a compelling case in a recent column in the Wall Street Journal. But, for now, it's safer to constrain these AIs to a discrete set of data, such as what's available to an underwriter or claims agent.

It's also important to see the results from these generative AIs as what they are: a very rough draft. 

Now, as someone who has spent decades doing his thinking with his fingers on a keyboard, I can tell you that even a very rough draft can be extremely valuable. Perhaps the key insight from one of the best books on how to write, "Bird by Bird," by Anne Lamott, is that you have to allow yourself to write crappy first drafts. (She uses a more colorful term.) But professional writers typically find that very hard to do. They're editing as they're writing and can't get out of their own heads long enough to let the words just flow. I tell people that 90% of my time writing is spent not writing--but the house sure gets clean when I have a big deadline coming up.

Something like ChatGPT can address that problem, because it does a great job of organizing a set of facts in a coherent flow. At that point, the writer can look at it and say, "Like this, hate that, let's move this around a bit," and so on. In fact, having AI do the first draft clears the way for another of Lamott's key insights, that you have to be willing to "kill your children." (I told you she was colorful.) She means you have to be willing to slash away even at words you've slaved over and fallen in love with. And it's a lot easier to be ruthless about cutting something you haven't written.

The writer still has to provide the insight, the personality or whatever else a piece requires, but generative AI can take a big chunk out of a part of the process that I, at least, find painful. And the same should be true for many people preparing reports and doing other kinds of work in any number of fields.

This kind of collaboration between AI and humans is already happening in other fields. For instance, AI is often used to screen mammograms before a radiologist reviews them. The AI can spot abnormalities so tiny that a radiologist might miss them and can also let the radiologist know which areas to zoom in on and which mammograms to focus on. The radiologist makes the call but gets a big assist from the AI.

Quantum computing also shows how powerful an unsteady technology can be when combined with error correction. Qubits stay in a quantum state for only a fraction of a second and are more likely to produce errors as they fall out of that state. But researchers are finding ways to correct those errors on the fly and to use conventional computers to verify results, letting quantum computing advance at startling speed.

My mantra, as always, is: Think Big, Start Small, Learn Fast. Just be doubly sure, in the case of generative AI, that you do your learning in private, not in public, as Google did.

Cheers, 

Paul

P.S. My favorite example of "killing your children" is John Cheever's short story "The Swimmer." It began as a full-length novel, but he killed so many of his words and ideas that it became a 3,500-word story that is regarded as perhaps his best. I have no idea how he had the mental fortitude to do that, but I respect the effort mightily.