Why I think AI will kill BigLaw
I’ve been asked to expand on my tweet: why do I believe that BigLaw as a concept will not survive the arrival of powerful AI?
Why Hire BigLaw?
I’d say there are three main reasons people hire a BigLaw firm:
1. The client needs customized or specialized advice. For example, the client’s in-house lawyers might know how to draft a simple vendor agreement, but not how to navigate a complex M&A deal with tax considerations, regulatory implications, complicated transaction mechanics, etc.
2. The client needs someone to do a large amount of legal work, potentially under a tight time frame. For example, a team of a few in-house lawyers would not be able to review, in a timely manner, thousands of documents dumped into a data room on a Friday - but a BigLaw firm will staff a dozen associates on this if needed, and they’ll work around the clock to finish the task by any deadline, no matter how unreasonable.
3. The client needs advice in a matter that is high-stakes, or that needs to be blessed by competent outside counsel, or that involves a counterparty using another BigLaw firm. Big M&A transactions, bet-the-company litigation, internal investigations, and sensitive matters requiring the establishment of a special committee all fall within this category.
Often, a matter will meet more than one of these criteria (and potentially will meet all three).
The BigLaw model involves one key partner (or a few key partners) providing mostly strategic advice to the client, plus a team of associates supporting the partners’ work. The associates do the research and draft a legal memo; a junior partner reviews it; the final work product goes to the senior partner, who glances at it and maybe distills it down to a few talking points for the client’s GC (the GC will not read the memo, but will listen to these talking points).
How does AI impact all this?
Category 1 (specialized advice): GPT-5.x Pro knows the tax laws and the regulatory implications. I also think we’re not far away from an AI harness that would enable a SOTA AI model to succinctly (and fairly quickly) summarize all legal implications of a particular fact pattern (e.g., in a legal memo) or implement them via contract language. LLMs would probably need to learn how to “write like a lawyer” a bit better in order to achieve this, but I don’t view this obstacle as being insurmountable. The remaining human role in this process: (1) verifying the LLM’s output, (2) providing high-level strategic advice, and (3) going “beyond the law” to things like unwritten regulatory requirements, market practices, etc. All of this can be done by a team of senior-partner-level people; associates are not required.
Category 2 (high-volume work): AI works much faster than humans, doesn’t need to take breaks, doesn’t sleep, and can work for many hours at a time - ’nuff said. The remaining human role in this process: verifying the LLM’s output by doing things like double-checking summaries of key documents, reviewing a sampling of documents to make sure the human reviewer agrees with the LLM’s conclusions, etc. This can be done by a small team of in-house counsel, with maybe some input from senior-partner-level people in a law firm.
Category 3 (high-profile work): This is where BigLaw firms will continue to dominate regardless of AI... at least for a while. Yet, slowly but surely, the law firms’ work will be eroded. No, don’t do the diligence; do only specified spot-checking and a high-level review of our (the client’s) AI’s findings. No, don’t draft the Merger Agreement; we’ll send you something we put together ourselves for review. No, don’t do the research; we’ll send you something our AI put together to double-check. Eventually, the value typically brought by BigLaw to these kinds of transactions (i.e., leveraging an army of associates to do the gruntwork) disappears, and it starts looking more and more like the client is hiring a particular partner (or team of partners) to do the entire matter. The leverage starts disappearing.
At some point, the best partners start asking themselves why they should share their profits with less successful partners - including those who have not adapted to the age of AI. And so, the senior partners will slowly begin leaving to set up their own boutique law firms. The client gets a great deal: Bob Jones, formerly the top deal-maker at Cravath, is still handling the client’s work, but charges the client only X% of what Cravath would charge - and the work is faster and of higher quality. The boutique law firm consists of Bob Jones, maybe a few specialist colleagues (tax, regulatory, etc.), and a few junior lawyers, paralegals and/or IT professionals whose main job is to work with AI models to rapidly produce high-quality legal work. On the client’s side, just a few in-house lawyers now handle not only the work that a much larger in-house legal team used to do, but also a good portion of the work that was formerly done by BigLaw.
There is no longer any room for BigLaw in this paradigm, and BigLaw firms start disappearing.
The timing for all this is extremely uncertain. The legal industry moves slowly. Lawyers are extremely non-technical. I’d venture to guess that 99% of lawyers today don’t know the power of GPT-5.4 Pro. This will eventually change, but how long is “eventually”? And when will the clients begin to understand, begin to truly internalize, the transformative impact that AI can have on the practice of giving legal advice?
Could be 2 years, or could be 10.[1]
This article also appeared on Twitter.
[1] I expect that some people will argue that AGI will be able to do 100% of all legal work. Well, yes - that very well might be true... but when that comes to pass, we’ll probably be living under conditions of post-scarcity anyway. In other words, if AI is performing all legal work, then it is probably also performing all other economically valuable work, and there is therefore nothing left for humans to do but take up hobbies or talk to each other on Twitter all day. I hope you’ll still find me here when this new reality comes to pass!

First, 100% agreed that current models still require guidance and careful prompting. This matches my personal experience. And I agree that there will still be a big role for lawyers to play - at least for a while; I'm just not convinced that those lawyers will be employed *at BigLaw*.
Second, regarding your question: there are a few considerations to keep in mind. Lawyers are extremely non-technical, and adopting a completely new technology scares them. There is a lot of confusion in the market about which models and harnesses are the best; I couldn't live without GPT-5.x Pro, but 99% of lawyers probably haven't even heard of it. There are long-standing fears of hallucinations (which have now been almost completely solved). Some lawyers tried GPT-4 on release, concluded that it sucked, and haven't realized that we're now light years beyond it. There is also a solid "nothing ever happens" component to all this: why should a boring U.S. public company adopt AI, and push its outside counsel to adopt AI, the next time it wants to make an acquisition? Finally, on the other side of the equation, there could be compute constraints (IMO quite likely, although the potential magnitude is unclear) that would prevent widespread use of models by the industry. It's all very messy and makes it VERY difficult to even guess at potential timelines with any kind of precision.
That's a great question. I think you're in a good place right now! Once you're out of law school, people won't really care about your degree that much (at least in my experience). If you do well in your first year+ in BigLaw and know how to use AI, I think you'll be positioned very well. I share your view that now is not the best time to take on a mountain of student debt if you can avoid it - both because you already have a BigLaw job and because having less debt will mean that you'll be much more flexible. Being as flexible as possible during times of great change is IMO very important!
I'm far from convinced that private practice jobs will disappear for junior attorneys - and especially those who know how to use AI. BigLaw might disappear at some point (which is the point of my article), but I think it will be replaced by more in-house jobs and jobs at smaller boutique law firms. (I could be wrong about this, of course; AI is very different from any other technology!) On balance, I'd probably stay away from working in the government unless you WANT to work in the government. That path might seem safer, but on the other hand the government could also do random layoffs as its legal functions are automated by AI, and those layoffs would be less likely to be based on merit than a layoff in a law firm (where being able to use AI could be one of the things that keeps you safe even in the worst-case scenario, assuming that you also otherwise do good work).