We discuss how he actually uses AI day to day, how he thinks about the security and privilege considerations, and what happens to the billable hour when you scale your work with AI.
Side note: for a demo of Claude on legal use cases, watch this LinkedIn Live recording I posted last week.
Introducing Zack
Zack Shapiro went to Yale Law School thinking he’d become an academic. If not law, it would have been a philosophy PhD. He eventually decided that Yale Law offered the same intellectual life with better job security and less time in training.
After law school came a year at Davis Polk, where his timing coincided with the ICO boom. He landed some of the firm’s earliest crypto work. Two federal clerkships followed, first with Judge Engelmayer in the Southern District of New York, then Judge Lynch on the Second Circuit.
Still not sure he wanted to practise in BigLaw, he joined a friend’s e-commerce startup, BZR, as an operational co-founder. They raised money from Founders Fund, Greycroft, and Abstract Ventures, and the business was acqui-hired in 2020. The lesson Zack took away surprised him: he never wanted to be a startup founder again. What he enjoyed was being a startup lawyer.
That year he launched a solo practice that grew into Rains. They’ve now advised over 200 clients across corporate law, venture financings, digital asset regulation, and increasingly, AI. Zack also serves as Head of Policy at the Bitcoin Policy Institute.
The post that broke legal Twitter
Before we get to how Zack uses AI, we need to talk about what brought him to most people’s attention: a post on X that hit 7.7 million views. I’m not sure a legal technology post has ever reached that level of virality.
Zack had been experimenting with X’s long-form articles feature. He’d already written two, one on the concept of the “AI centaur” borrowed from the chess world, another on what AI means for intrapreneurs inside larger organisations. Both did reasonably well, but nothing compared to the third. It laid out how he uses Claude as a practising lawyer.
He shares the moment he knew it had gone viral. At around 10,000 views, the notifications started going haywire. He describes it like a slot machine hitting the jackpot. He couldn’t do anything for the next two hours but watch the notification pings come in. Luckily, it was a Friday afternoon.
The piece generated a huge amount of debate and the comments kept rolling in on X and LinkedIn. Some praised it as a practical roadmap; others dismissed it as “productivity theatre” or questioned whether Claude has the enterprise features needed for BigLaw. Either way, it got people talking.
The lesson Zack took from it was that people want specific, practical examples of what AI looks like in real legal work. And we got into some of that in the discussion.
Why Claude?
Zack points to two features of Claude that he thinks make the difference.
First, Claude can write code on the fly. Before this, he’d use ChatGPT to help think through contract edits, but the best it could produce was a list of redlines he’d then manually apply in Word. The formatting would invariably break. With Claude, he found a way to get it to manipulate documents directly, which he describes as XML under the hood, published as a Word doc with tracked changes.
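For the curious, the “XML under the hood” is WordprocessingML, the XML format inside every .docx file, and tracked changes are just markup. A minimal, illustrative fragment (the element names come from the OOXML standard; the contract language and author name are invented):

```xml
<!-- Inside word/document.xml of a .docx: one tracked insertion and one
     tracked deletion, each attributed to an author and a timestamp -->
<w:p>
  <w:ins w:id="1" w:author="Claude" w:date="2025-06-01T12:00:00Z">
    <w:r><w:t>thirty (30) days</w:t></w:r>
  </w:ins>
  <w:del w:id="2" w:author="Claude" w:date="2025-06-01T12:00:00Z">
    <w:r><w:delText>sixty (60) days</w:delText></w:r>
  </w:del>
</w:p>
```

Because it is just text, a model that can write code can generate or edit these elements directly, which is why the redlines survive the round trip back into Word.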
Second, he found that Claude can create and work with local files. In his view, this addresses the context window limitation that degrades long-running conversations. Instead of relying on the model’s memory and context window, Zack also stores context in markdown files on his computer, effectively creating external memory that can be referenced as needed.
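The external-memory pattern is simple enough to sketch. This is a hypothetical illustration of the idea, not Zack’s actual setup: the directory layout and file naming are my own.

```python
from pathlib import Path

# Hypothetical layout: one markdown memory file per client
CONTEXT_DIR = Path("client-context")

def save_context(client: str, note: str) -> None:
    """Append a note to the client's markdown memory file."""
    CONTEXT_DIR.mkdir(exist_ok=True)
    path = CONTEXT_DIR / f"{client}.md"
    with path.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")

def load_context(client: str) -> str:
    """Read the file back, ready to drop into a fresh conversation."""
    path = CONTEXT_DIR / f"{client}.md"
    return path.read_text(encoding="utf-8") if path.exists() else ""

save_context("acme", "Prefers Delaware C-corp; fiscal year ends June 30")
print(load_context("acme"))
```

The point is that nothing here depends on the model’s context window: each new session starts by reading the relevant files back in, so long-running matters don’t degrade the way long-running conversations do.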
He’s also a huge fan of Skills, the open standard that I’ve written about previously and recommend that all law firms should be experimenting with. TL;DR: Skills are simple, human-readable files that explain to an agent how to tackle a particular task. Zack describes it as a zip file you could send to 500 associates, your judgment encoded as a skill file that scales like software.
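Concretely, a skill is a folder containing a SKILL.md file: YAML frontmatter telling the agent when the skill applies, followed by plain-English instructions. An illustrative sketch — the structure follows Anthropic’s published format, but the content here is invented, not an actual Rains skill:

```markdown
---
name: engagement-letter
description: Draft a standard engagement letter when the user asks for one
---

# Engagement letter

1. Extract from the prompt (or ask for): client entity name, signatory,
   retainer amount, and scope of work.
2. Start from the firm's standard template and fill in those fields.
3. Keep the standard scope language unless the user overrides it.
4. Output a Word document, no tracked changes.
```

This is what “your judgment encoded as a skill file” means in practice: the instructions are readable by any associate, but executable by the agent.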
The Secret Sauce (public version)
Zack sees a clear split online between people who say AI has given them superpowers and people who think the whole thing is productivity theatre. He thinks the gap comes down to two things.
Disciplined input. The model is a fuzzy tool. Fuzzy input produces fuzzy output. Precise, detailed instructions produce much better results. He argues that most legal AI companies are focused on the wrong problem, training models or using variations of RAG for contract and brief templates. In his view, the training data already contains more of those than anyone could need. The bottleneck is the prompting and the context. The good news for lawyers: precision and specificity are skills the profession already selects for.
Reinforcement over time. Once you’ve built up enough back-and-forth with the model, you encode what works into skills. When it does something well, you reinforce it. When it does something poorly, you update the skill. The usefulness compounds. It’s a compelling idea, though it requires discipline and iteration, and one wonders whether every practitioner will have the time or inclination for that unless it happens automatically in the background.
A day in the life
Email is still where the work arrives. But the “substantive lawyering” now happens inside Claude.
An engagement letter used to mean opening Word, editing the scope, swapping in the client name and retainer amount. Now it’s a one-sentence instruction: engagement letter for this company, addressed to this person, here’s the retainer, standard scope. The letter comes out the other end.
He’s also built a custom tool combining Claude Code with ElevenLabs that reads long documents aloud, which is helpful because a health condition makes it hard for Zack to read longer documents on screen. But for working with Claude itself, he types. Long prompts, often 2,000 words, written like essays. He finds that typing without worrying about grammar or spelling is faster than voice, and that being redundant about the things that matter is a feature, not a bug.
Zack says the drudgery is gone and the work feels more joyful.
On vibecoding and the future of legal deliverables
Long-time readers will know I’m big on vibecoding. I asked Zack whether he’s started vibecoding things as a way of delivering advice. Dashboards, interactive maps, visual tools. His answer was no.
He’s sceptical of anything that intermediates between a client’s intent and the lawyer’s delivery. Take the classic 50-state regulatory review. Version one is the memo. Version two might be an interactive visual. But Zack’s thinking about version three: what if you deliver the answer in the format the client actually needs? Not a memo about sales tax rules, but the code to make their sales engine compliant across all 50 states.
It’s an interesting provocation, though it raises its own questions about where legal advice ends and software engineering begins, and who’s responsible when the code is wrong.
On security
Information security is probably the question Zack gets most in X threads.
On privilege: he thinks it’s easier than people assume. Many of the negative reactions to his article cited the Heppner case, the February 2026 ruling from Judge Rakoff in the Southern District of New York. But Zack argues that case is distinguishable. In Heppner, a criminal defendant used a consumer version of Claude, on his own initiative and not at counsel’s direction, to research legal strategy. The privacy policy allowed training on inputs. Judge Rakoff found no reasonable expectation of confidentiality and no privilege. A law firm using an enterprise AI tool with training turned off, generating attorney work product at counsel’s direction, is a different posture in Zack’s view. Whether the courts will draw that line clearly remains to be seen; Judge Rakoff himself noted that the analysis “might differ” if counsel had directed the AI use.
On data confidentiality: more nuanced, and requiring case-by-case judgment. The spectrum runs from cloud-hosted with zero data retention, through custom DPAs, local inference, and encrypted AI, to simply not putting certain data into any model.
Zack reserves his sharpest words for some legal AI vendors, who he sees as “selling fear”. He believes there are companies pushing expensive platforms with checkbox workflows that, in his view, ultimately aim to automate away the lawyers buying them. He’d rather lawyers engage with the ethical rules and the technology directly and build things themselves. Not everyone will agree; some firms will conclude that a managed platform is the most practical way to meet their compliance obligations, but Zack believes that is more fear and hype than reality.
Pricing in a post-AI world
Zack tells me that Rains charges hourly rates at roughly half the cost of Big Law, with overall service costs landing at about a quarter, the additional reduction coming from AI-driven efficiency. Most clients are on subscriptions denominated in a cap of human hours but calculated to be functionally all-you-can-eat. The long-term goal is flat subscriptions, but the technology isn’t reliable enough yet to remove the human-attention safeguard.
The tension Zack identifies is that the value of the work product is becoming untethered from the hours spent producing it, but the capacity to exercise judgment is still measured in human time. Overextending means falling into the temptation of not checking the AI’s output.
Scaling through Claude, not headcount
Rains already runs multiple Claude chat and Cowork sessions in parallel (all on screen for now!). In his opinion, one lawyer plus Claude can replace a partner plus a team of associates on certain matters.
But taking on more clients doesn’t scale the same way, because each one requires human judgment. To grow that side, he’d need to hire lawyers who use AI the way he does. That’s a small pool right now.
He’s thinking about what comes next: training for larger firms, forward deployment into in-house teams, possibly selling his agentic workflow to a tech company. He sees two possible futures for the profession. One where everyone ends up inside opinionated platforms and everything becomes a process. Another where lawyers use AI directly and the profession opens up.
On venture capital and AI law firms
Y Combinator’s latest batch included two AI-native law firms, General Legal and LegalOS, and another legal services platform, Arcline. VC money is flowing into the space more broadly. (Take a look at my list of AI law firms here.) For now at least, Zack isn’t rushing to take any.
His concern is that the incentives of a venture-backed AI law firm push towards automating everything, including the judgment, and delivering what he bluntly believes is slop. In his view, you need good lawyers doing the lawyering, with automation built around that. He points to Atrium, the hybrid law firm and legal tech company that raised $75 million before imploding in 2020, as a cautionary tale.
The tools are already here, he argues. He doesn’t need $60 million to keep building skills. But he’s open to conversations.
How to get involved
Zack is open to conversations with Big Law managing partners, in-house leaders, and tech companies thinking about the future of legal work. Reach him on X at @ZackBShapiro or email info@rains.law.
For a live demo of Claude on legal use cases, watch this LinkedIn Live recording I posted last week.
Links
Rains (rains.law)
Zack Shapiro on X (@ZackBShapiro)