As of early 2026, artificial intelligence (AI) has made only a modest dent in the daily practice of law. Adoption is rising, but cautiously. Many lawyers still avoid AI altogether; others limit its use to narrow, low-risk applications. This restraint sits uneasily alongside the predictions that have circulated since early 2023, when evidence emerged that ChatGPT could earn passing grades on law school exams—and even on the bar exam.
The gap between promise and practice has fed a familiar narrative: if AI were truly transformative, law firms would already look different. Because they largely do not, skeptics argue that AI's disruptive potential has been oversold.
That conclusion gets the timing wrong, and it misunderstands what disruption in law actually looks like.
From the beginning, strong performance on exams was a poor proxy for real-world impact. Legal practice is governed by ethical obligations, professional judgment, and client risk, not multiple-choice questions. Lawyers must closely review AI-generated output, especially given well-documented risks of hallucinations and subtle errors. In many contexts, reviewing and correcting AI-assisted work has proven slower than drafting the work directly, or has yielded lower-quality outcomes than human-only work, particularly when the lawyers involved are highly skilled or when precision matters more than cost savings.
The problem, in short, was not overhype. It was the wrong benchmark.
The question that actually matters is not whether AI can perform legal tasks on its own, but whether lawyers using AI outperform comparable lawyers who do not. Until recently, the answer to that question was at best unclear. Now, emerging evidence suggests it is turning decisively in AI's favor.
In a recent study, my colleagues and I reported the first randomized controlled trial evaluating two AI innovations with direct implications for legal work. The first is Retrieval-Augmented Generation (RAG), which grounds AI outputs in retrieved, authoritative legal sources rather than relying on the model's training data alone. The second is AI reasoning models, which work through a structured chain of analysis before generating a response.
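To make the RAG idea concrete, here is a minimal sketch of the pattern in Python. Everything in it is an assumption for exposition: the toy corpus, the keyword-overlap retrieval, and the function names are all hypothetical, and production legal AI systems retrieve from curated case-law and statute databases using dense vector search rather than anything this simple.

```python
# Minimal sketch of a Retrieval-Augmented Generation (RAG) loop.
# All names and data here are hypothetical; real legal AI systems
# retrieve from curated legal databases with vector search,
# not keyword overlap over a toy corpus.

import re
from dataclasses import dataclass


@dataclass
class Document:
    citation: str  # e.g., a case name or statute section
    text: str


# A toy stand-in for an authoritative legal database.
CORPUS = [
    Document("Source A", "A contract requires offer, acceptance, and consideration."),
    Document("Source B", "Consideration must be bargained for; past acts do not count."),
    Document("Source C", "Some contracts must be evidenced by a signed writing."),
]


def tokenize(text: str) -> set[str]:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))


def retrieve(query: str, corpus: list[Document], k: int = 2) -> list[Document]:
    """Rank documents by naive keyword overlap with the query."""
    q = tokenize(query)
    ranked = sorted(corpus, key=lambda d: len(q & tokenize(d.text)), reverse=True)
    return ranked[:k]


def build_prompt(query: str, sources: list[Document]) -> str:
    """Ground the model in retrieved sources and demand citations."""
    context = "\n".join(f"[{d.citation}] {d.text}" for d in sources)
    return (
        "Answer using ONLY the sources below, citing each by name.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\n"
    )


if __name__ == "__main__":
    question = "Is past consideration enough to form a contract?"
    prompt = build_prompt(question, retrieve(question, CORPUS))
    print(prompt)  # This grounded prompt is what gets sent to the model.
```

The design point is the grounding step: because the prompt is assembled from retrieved sources, the model's answer can be checked against, and cited to, specific authorities rather than generated from memory alone, which is what bears directly on the hallucination risks noted above.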
In the study, upper-level law students were randomly assigned to complete realistic legal tasks using either a RAG-enabled legal AI system, an AI reasoning model, or no AI assistance at all. The results mark a clear break from earlier studies focused on prior generations of large language models. Across multiple tasks, participants using modern AI tools produced meaningfully higher-quality legal work. They also worked much faster. In five of six tasks, productivity increased by between 50% and 130%.
This evidence reframes the AI debate in law. The story is no longer about machines replacing lawyers—or about AI's ability to ace exams. It is about augmentation that finally works: tools that allow lawyers to do better work, in less time, without sacrificing professional standards.
That shift carries real consequences for legal institutions.
Firms that continue to treat AI as an experimental add-on or a compliance risk may soon find themselves at a competitive disadvantage. If AI-enabled lawyers can reliably produce higher-quality work more efficiently, then billing models, staffing decisions, training pipelines, and even partner expectations will come under pressure. Early adopters will not simply save time; they will set new baselines for quality and speed that others will be forced to match.
The implication is clear. The window for cautious observation is closing. In 2026, the strategic question for law firms is no longer whether AI will meaningfully affect legal practice, but how quickly—and whether they will shape that transition or be shaped by it.
