
The promise is seductive in its simplicity: AI takes over the routine work, you get time for the work that really matters. Fewer meetings to summarize, fewer standard documents to prepare, less debugging, more strategy, more creativity, more impact.
But what if the exact opposite happens?
Today Harvard Business Review publishes an investigation that challenges that promise head-on. Researchers Aruna Ranganathan and Xingqi Maggie Ye from UC Berkeley spent eight months following how generative AI changed daily work at an American technology company of around 200 employees. Their conclusion is sobering: AI tools don't reduce work, they intensify it. Employees worked faster, took on more tasks, and stretched their workday. Not because they had to, but because AI made “doing more” possible, accessible, and even intrinsically rewarding.
This is not a theoretical risk. It is a pattern now playing out in knowledge organizations worldwide, and it deserves more attention than it gets.
The three forms of work intensification
The Berkeley study identifies three mechanisms through which AI does not make work easier, but makes it harder. They are recognizable to anyone who works with AI tools on a daily basis.
Task expansion: everyone does everything
Because AI fills knowledge gaps, employees increasingly do work that previously belonged to others. Product managers start writing code. Researchers take on engineering tasks. People pick up tasks they would previously have postponed or handed off. AI makes those tasks newly accessible - it feels like "just trying" - but the experiments accumulate into a significantly broader range of tasks.
The side effect is insidious. Engineers spend more time reviewing, correcting and supervising the AI-enabled work of colleagues. They coach colleagues who are "vibecoding" and complete half-finished pull requests. This supervision happens informally - in Slack threads, during quick questions at the desk - and quietly adds work to their plate.
In effect, employees absorb work for which additional capacity or headcount would previously have been justified. The organization gets more output without more people - but the burden shifts, invisibly, to the existing employees.
Blurring of boundaries: work creeps into everything
Because AI so drastically lowers the threshold for starting a task - no more blank page, no unknown starting point - work slips into moments that used to be breaks. Employees prompt AI at lunch, in meetings, while waiting for a file to load. Some send a "final prompt" just before they leave their desk, so the AI can work while they're away.
Those actions rarely feel like more work. But cumulatively they produce a workday with fewer natural breaks and a continuous engagement with work. The conversational style of prompting reinforces this: typing a line into an AI system feels more like chatting than formal work, allowing work to slide effortlessly into the evening or early morning.
Several employees realized - often in retrospect - that prompting during breaks had become so habitual that downtime no longer provided the recovery it once did. In the words of the researchers, work no longer feels limited but "ambient" - something that can always be taken just a little further.
More multitasking: the illusion of momentum
AI introduces a new workflow in which employees manage multiple active threads simultaneously: manually writing code while AI generates an alternate version, running multiple agents in parallel, or picking up long-delayed tasks because AI can "handle them in the background." They do this partly because they feel like they have a "partner."
That sense of partnership creates momentum - but the reality is constant switching of attention, frequent checking of AI output, and a growing number of open tasks. What remains is cognitive load and the feeling of always juggling, even when the work feels productive.
Over time, this rhythm increases expectations around speed - not necessarily through explicit demands, but through what becomes visible and normalized in daily work. As one engineer summed it up:
"You had thought that maybe, oh, because you could be more productive with AI, then you save some time, you can work less. But then really, you don't work less. You just work the same amount or even more."
The self-reinforcing flywheel
What makes the Berkeley study so valuable is that it does not present three separate observations, but describes a system. AI accelerates tasks, which increases expectations around speed. Higher speed makes employees more dependent on AI. Greater dependency broadens what they try, and a broader range of tasks increases the amount and density of work.
The result is a self-reinforcing flywheel: employees feel more productive, but not less busy - in many cases busier than before. And because the extra effort is voluntary and often experienced as enjoyable experimentation, it is easy for managers to overlook how much extra burden employees are carrying.
This is exactly why it is dangerous. What looks like higher productivity in the short term can mask invisible workload creep and growing cognitive tension in the longer term. Overload leads to impaired judgment, more errors, and the inability to distinguish real productivity gains from unsustainable intensity. For employees, the cumulative effect is fatigue, burnout, and the growing feeling that work is becoming increasingly difficult to let go.
Not an incident but a pattern
The Berkeley study is not an isolated case. A growing corpus of data points in the same direction.
Upwork's survey of 2,500 professionals found that 77% say AI tools have decreased their productivity and added to their workload in at least one way. 39% spend more time reviewing AI-generated content, and 21% are simply asked to do more. Meanwhile, executives systematically overestimate the AI proficiency of their teams: 37% of C-suite leaders believe their workforce is “highly skilled” with AI, while only 17% of employees say the same of themselves.
Microsoft's Work Trend Index 2025 reported a 42% increase in "digital exhaustion" - driven not by AI itself, but by tool sprawl and unclear workflows: exactly the environment in which AI-driven intensification hits hardest.
And research from Adecco and BCG shows a telling pattern: only 21-27% of employees use the time gained by AI for their personal lives. The vast majority invest it in increasing the volume and quality of their professional output. Parkinson's Law - work expands to fill available time - is alive and well in the AI age.
What organizations can do: the “AI practice”
The researchers do not advocate less AI use. They advocate for more deliberate AI use – what they call an “AI practice”: a set of intentional norms and routines that structure how AI is deployed, when it is appropriate to stop, and how work should and should not expand in response to new capabilities.
Three concrete guidelines deserve attention:
Intentional pauses. Not as a luxury but as a structural intervention. A "decision pause" before an important decision is made - formulating one counterargument, making one explicit link to organizational goals - is enough to widen attention just enough. This does not slow the work down; it prevents the silent accumulation of overload that occurs when acceleration continues unchecked.
Sequencing. Rather than reacting to each AI-generated output as it appears, the researchers encourage working in coherent phases. Bundle non-urgent notifications, hold updates until natural breakpoints, and protect focus windows in which employees are shielded from interruptions. Less fragmentation, fewer costly context switches, better retention of attention.
Human anchoring. As AI enables more solo, independent work, organizations must consciously protect time and space for listening and human connection. Short check-ins, shared reflection moments, structured dialogue - they interrupt the continuous solo interaction with AI tools and restore perspective. AI offers a single, synthesized perspective. Creative insight requires exposure to multiple human points of view.
The lesson for Dutch knowledge organizations
For the Dutch context, there are a few specific points of attention that we at Thinka would like to emphasize once again.
The pressure is two-sided. On the one hand, there are organizations pushing AI adoption and flooding employees with tools and expectations. On the other, there are employees who pile work onto themselves with AI, without restraint. Not because they have to, but because they can. Both dynamics lead to the same point: unsustainable intensity. The answer lies in the middle: use AI deliberately, with agreements about scope, boundaries and expectations.
Vibecoding creates hidden work for seniors. The HBR research confirms what many experienced engineers already feel: when colleagues without in-depth technical knowledge start generating code with AI, the burden shifts from creation to review. Someone has to review, correct, and integrate that code. Without clear agreements, the senior engineer becomes the silent safety net of every AI experiment in the organization.
“Doing more” is not always “doing better.” The most tempting pitfall of AI is that it rewards quantity over quality. More documents, more analyses, more communication - but not necessarily better decisions. The organizations that use AI most sustainably are not those that produce the most, but those that consciously invest the freed-up time in deeper thinking, better decision-making and strategic reflection.
Conclusion: the question that every organization must ask itself
The Harvard study concludes with an observation that asks exactly the right question:
"The question facing organizations is not whether AI will change work, but whether they will actively shape that change -or let it quietly shape them."
That's the gist. AI makes it easier to do more - but harder to stop. Without conscious choices about how, when and for what AI is used, the natural tendency of AI-enabled work is not to lighten but to intensify. With all the consequences that entails for burnout, decision quality and sustainable employability.
The productivity promise of AI is not false. But it will only be fulfilled if organizations have the discipline not to automatically reinvest the gains in more output, but in better work. In rest. In reflection. In the kind of thinking that does not involve AI.
That is, ultimately, the paradox of productivity: the real gain lies not in what you do extra, but in what you consciously choose not to do.
