Coder Cannibalism

Developers who automated other industries now face AI displacement themselves, as technical certifications prove less valuable than human judgment and accountability.

Most of my friends are coders—and, disclosure, I used to be one. Smart people. Good people. People who spent years mastering arcane syntax, memorizing AWS service catalogs, stacking certifications like frequent flyer miles, and genuinely believing—with some justification—that they were the high priests of the modern economy.

They automated the travel agents. The paralegals. The loan officers, the radiologists, the customer service reps, even the truckers—at least in theory. And they did all of it with a clear conscience because, hey, that's capitalism, baby. Creative destruction. If we can do it better, faster, cheaper, then by the immutable laws of the market, we should.

They were not wrong. And they were not unkind people. They just never believed, not really, not in their gut, that the logic had a return address.

It does.

Amazon just laid off a cohort of developers whose primary offense was building something that worked. The system they constructed—on AI, with AI, as a monument to AI—became, upon completion, the argument for their own termination. The product was the pink slip. You couldn't script a better parable. These weren't junior button-pushers. Some of them held AWS Solutions Architect certifications. Professional level. The kind of credential that used to mean something in a job interview, that used to justify a salary band, that used to make a hiring manager feel confident they were buying proven expertise.

What they were actually buying, it turns out, was structured knowledge retrieval. Which is a very polite way of saying: a human being who had memorized a lot of things and learned to pattern-match against them quickly. And if there is one thing—one single thing—that large language models do better than humans, it is exactly that. The machine doesn't need a certification. It doesn't need a salary. It doesn't get defensive when you change the requirements at 11 p.m.

So here we are. The hue and cry from the coding community is structurally identical to every argument that was dismissed when the travel agents and the paralegals and the loan officers were in the crosshairs. "This is different. This requires real skill. You don't understand the complexity."

Brother, Sister, those whose jobs you automated said the same thing. You just didn't listen because you were the one holding the compiler.

The real question, the one worth asking these days, is this: which skills actually don't have a shelf-life problem? Some of them seem obvious in retrospect, and most of them aren't technical.

Regulatory judgment under uncertainty is one. Not knowing what a rule says—AI can read the Federal Register faster than any human—but knowing what it means when a specific auditor in a specific regional office has been interpreting it a certain way for three years. That's pattern recognition built from exposure and consequence, not training data. A friend of mine who works in healthcare private equity says the top three risks related to any deal are regulatory in nature—gray area, subjective.

Organizational power mapping is another. Every failed technology implementation in history failed for the same reason: someone built the right thing for the wrong power structure. The CMO thinks she controls the data. The CFO controls the budget. The VP of operations controls the workflow. The IT director controls the timeline through "security review." No AI maps this. No certification covers it. This is human intelligence in the original meaning of the phrase.

Cross-domain translation may be the rarest and most durable skill of all. The ability to stand in a room and make a CMS actuary, an Epic build team, and a 55-year-old case manager all feel heard, and then synthesize what they need into something that actually ships—that's not a technical skill. It never was. We just told ourselves it was adjacent to technical skill so the coders could claim it.

And finally, accountability. The willingness to put your name on a recommendation and mean it. AI is a brilliant, tireless, unaccountable collaborator. In regulated industries—healthcare, insurance, finance, law—where the downside of being wrong is measured in dollars with a lot of zeroes or people with actual problems, someone has to own the outcome. That someone is still a human being with a name and a reputation and something to lose.

The coders who survive this aren't the ones who fight the AI. They're the ones who understand that the job was never really about the code. It's about the judgment surrounding the code. Which goes some way toward explaining why even Stanford CS grads are struggling to find jobs while McKinsey is hiring liberal arts majors again. Coders just got away with charging for the code because nobody had built the machine yet.

Now somebody has.


Tom Bobrowski

Tom Bobrowski is a management consultant and writer focused on operational and marketing excellence. 

He has served as senior partner, insurance, at Skan.AI; automation advisory leader at Coforge; and head of North America for the Digital Insurer.
