Are Agents Burning Us Out?
The human context window is being stretched to the limit by agents moving faster than we can think or track work. How should legal teams respond?
In 1999, Bill Gates wrote Business at the Speed of Thought, arguing that information should flow through an organisation as naturally and quickly as thought itself. Twenty-seven years later, AI has delivered on that promise. The problem is that information is now starting to move faster than the humans who need to make sense of it.
A new study from UC Berkeley, published by Harvard Business Review, gives us a glimpse of where things are heading. Researchers Aruna Ranganathan and Xingqi Maggie Ye spent eight months inside a 200-employee tech company studying how AI agents actually affect the people using them. Their central finding was striking: AI doesn’t necessarily lighten workloads. It increases the intensity of work.
This matters for legal because our industry is about to adopt AI and agentic teammates at a pace that will make the last two years look like a slow start. And legal work, with its long hours, high stakes, and culture of availability, may be uniquely susceptible to exactly the kind of intensification this research documents.
What did the researchers find?
The Berkeley researchers documented three patterns of work intensification:
Task expansion. Workers absorbed jobs previously done by others or outsourced. Product managers started writing code. Researchers took on engineering tasks they’d never touched.
Dissolved boundaries. Work bled into lunch breaks, evenings, and early mornings. Because AI chat feels like conversation rather than work, it can happen anytime, anywhere. A lot of these tools also have a strong dopamine loop: you get instant output from a prompt.
Constant multitasking. Employees juggled multiple AI-mediated tasks simultaneously, bouncing between agent sessions during meetings, while waiting for files, between calls. Claude Code fans, think about those moments when Claude is “Combobulating”. What do you do during these “Discombobulation Breaks”? It seems most of us context switch continuously.
The consequences were predictable: cognitive fatigue, burnout, declining quality, and turnover.

What does this mean for lawyers?
We’re already seeing this play out in legal. As AI makes tasks like contract review, due diligence, and legal research faster and cheaper, lawyers aren’t really doing “less work”. They’re becoming responsible for supervising an ever-growing volume of outputs, tasks, and risks.
Categories of agreement that were previously excluded from DD reviews can now be included. SaaS agreement reviews can be outsourced at scale with agreed SLAs. Clients expect continuous real-time regulatory alerts, not a quarterly risk review.
Where a lawyer once deeply owned a handful of matters, they might now oversee a portfolio of a hundred mini-tasks and sub-matters handled by machines and human helpers. The level of involvement changes. The role becomes more supervisory. But the net result is that lawyers touch more things, more often, with less time and depth for each one.
Meanwhile, expectations for output quality stay constant or increase. AI raises the bar on turnaround times and on fidelity alike. Work must be fast and good.
Why the human context window is struggling
To state the obvious, humans have cognitive capacity limits. AI agents expand what can be done. They don’t inherently expand the human capacity to track, evaluate, and organise it all.
Cognitive psychologist George Miller’s research established that human working memory can hold approximately 5 to 9 unrelated items at once, a finding widely known as the “7 ± 2” rule. More recent studies suggest the true capacity may be closer to 3 or 4 items. Atul Gawande made a similar observation in The Checklist Manifesto (this is a great book btw), arguing that the complexity of modern knowledge work has already exceeded what a single professional can reliably manage within their working memory and attention span.
Right now, most lawyers can keep the status of their matters in their head. A client calls on a Friday wanting to know where things stand across their portfolio, and the lawyer can synthesise a summary blending the big picture with key details.
But as each lawyer becomes personally responsible for more matters, contracts, jurisdictions, and clients, this gets harder. It’s like reading a hundred novels in parallel, one paragraph at a time, and being asked to keep track of what’s happening in each.
Consider the associate who once reviewed one complex agreement per day. With AI document review, that same associate might now handle thirty. The AI flags issues, generates summaries, handles the mechanical work. But the human still evaluates context, assesses risk, and makes judgement calls.
The effects then start to compound, with more context switching as lawyers bounce between quick AI-assisted tasks. Continuity suffers because each matter carries its own logic, timeline, and risks. Decision fatigue sets in as work becomes a rapid series of small calls: edit this clause, approve that draft, escalate that risk, fine-tune that agent.
So what do we do about it?
Ranganathan and Ye propose what they call “AI Practice,” structured norms that protect humans from intensification. Their recommendations seem sensible and include:
Build in pauses after AI-assisted sprints
Resist the urge to parallel process everything
Keep humans connected to humans for quality control and peer review
Set boundaries on agent use outside work hours
Beyond those principles, I think two other things are worth acting on now.
Get status tracking out of your head
If AI is going to multiply the number of matters, tasks, and decisions a single lawyer is responsible for, we need systems that externalise the tracking. We just can’t keep it all in our heads.
The lawyer who can see in one place who is doing what, what’s been done, and what needs to happen next is in a fundamentally different position from the one trying to hold it all in working memory.
Full disclosure: this is exactly what we’ve built at Lupl, so I’m biased. But the broader point stands and it’s one reason task and project management in legal is having its moment.
Take a “Discombobulation Break”
OK this isn’t a real term. I just invented it. I’m talking about what you do when your AI agent is off doing its thing, which can be anywhere from 30 seconds to 30 minutes or longer.
What I’m personally trying to do, with mixed results, is resist the urge to immediately start another task or check my phone. The temptation to fill every gap is strong. But perhaps the most productive thing we can do while the agent is working is…nothing at all? (Or maybe a quick walk and some fresh air!)
Final Thoughts
None of this is to say AI is a bad thing. I’m bullish on its potential to improve outcomes in our industry. But it is introducing a whole new paradigm for how we work, and it’s happening now, in real time, faster than we can devise systems to adapt.
It feels to me like there’s a structural mismatch when one person, augmented with AI, is expected to increase their output or area of responsibility tenfold, because it still takes one human brain to understand and take responsibility for the overall outcome.
Ranganathan and Ye have provided an early piece of evidence. It’s now up to us to figure out how to adapt to this new normal.
Source: This post draws from “AI Doesn’t Reduce Work, It Intensifies It” by Aruna Ranganathan and Xingqi Maggie Ye of UC Berkeley, published in Harvard Business Review, February 2026.