Artificial Intelligence at Work: Why Automation Increases Workload Instead of Reducing It
The idea of artificial intelligence as a tool that frees people from routine has long been part of corporate promises. Automatic drafts, fast summaries, and help with code or data analysis were supposed to reduce pressure, shorten the working day, and leave more time for complex, high-value tasks. However, research published by Harvard Business Review paints a different picture. In real working conditions, generative AI tools do not reduce the amount of work; instead, they gradually increase it, often without direct pressure from management.
Over the course of eight months, the researchers examined how the use of generative AI changes everyday work in a U.S. technology company with approximately 200 employees. They observed workflows, analyzed internal communication channels, and conducted more than forty in-depth interviews with employees from different teams: engineers, designers, researchers, and operations staff. One important detail stands out: the company did not require employees to use AI. It simply provided access to the tools. All changes in workload emerged from the initiative of the employees themselves.
The result turned out to be unexpected for many leaders. People began working faster, taking on a broader range of tasks, and stretching their workday across more hours. AI created a sense that “more can now be done,” and this feeling gradually pushed employees toward voluntarily expanding their workload. Formally, productivity increased, but the subjective sense of being busy did not decrease. In many cases, it actually intensified.
The researchers identified three key mechanisms through which generative AI intensifies work.
The first is task expansion. Generative AI lowers the barrier to entering new types of tasks. What previously required specialized knowledge or help from colleagues now feels accessible. Product managers and designers began writing code. Researchers took on engineering tasks. Employees attempted work they would previously have outsourced, postponed, or avoided entirely. AI provided a sense of cognitive support: it suggested options, corrected mistakes, and explained problems. This felt like empowerment, but in practice it meant that a single person gradually took on more functions.
This also produced secondary effects. Engineers, for example, spent more time not only on their own work but also on reviewing and correcting AI-assisted outputs produced by colleagues. Much of this oversight happened informally, in chats, short consultations, or comments on partially completed work. In this way, workload spread across teams, even when it looked like simple collaboration.
The second mechanism is the blurring of boundaries between work and rest. Generative AI significantly reduced the friction of starting a task. There was no longer a need to stare at a blank document or figure out where to begin. A short prompt was enough. As a result, employees began to “do a little work” during lunch, in breaks, between meetings, or at the end of the day. Some launched prompts just before stepping away from their desk so that AI could “work in the background.” These actions did not feel like full-scale work. They seemed small and almost invisible. But over time, they accumulated. Natural pauses disappeared, and work became a constant presence throughout the day. The conversational style of interacting with AI played a role as well. Typing a short instruction felt more like chatting than performing a formal work task. Because of this, work easily spilled into evenings and early mornings without a clear decision that “now I am working.”
The third mechanism is increased multitasking. AI changed the rhythm of work. Employees simultaneously wrote code while waiting for an alternative version from AI, ran several agents in parallel, or returned to long-deferred tasks because they could now be “handled” by the tool. There was a sense of having a constant assistant nearby, which created an illusion of control over a larger volume of work.
In reality, this meant constant switching of attention. People regularly checked AI outputs, corrected them, and returned to previous tasks. The number of open processes grew, along with cognitive load. Even when productivity appeared high, fatigue accumulated more quickly.
Over time, these three mechanisms began to reinforce one another. AI accelerated individual tasks, speed became the new norm, and that norm pushed employees to rely even more on AI. Increased reliance expanded the scope of work, and a broader scope demanded even greater speed. Participants in the study often said they felt more productive but not less busy. Some admitted they felt more overloaded than before using AI. In the short term, this dynamic can look like success. Teams do more, faster, without increasing headcount. But the researchers point to delayed risks. Hidden workload growth leads to cognitive fatigue, burnout, reduced decision quality, and a higher likelihood of errors. What initially feels like a productivity breakthrough can later result in turnover and declining outcomes.
For this reason, the authors emphasize that it is not enough for companies to simply introduce AI tools. A clear system of rules and habits for working with them is needed. They call this an "AI practice": a deliberate approach to how, when, and why artificial intelligence is used.
The first principle is mandatory pauses. As work accelerates, it becomes important to deliberately create moments for reviewing decisions, rethinking direction, and restoring pace. Such pauses do not slow work; they prevent the quiet accumulation of overload.
The second principle is sequencing. Instead of reacting immediately to every AI output, work should be organized in stages: grouping tasks, limiting parallel processes, and protecting time windows for focused work. This reduces attention fragmentation and helps maintain a stable rhythm.
The third element is preserving live interaction. As AI enables more autonomous work, the risk of isolation increases. Short discussions, shared reflection, and exchanging views with colleagues help maintain critical thinking and counterbalance the effects of constant individual work with tools.
The research shows that generative AI truly changes work, but not necessarily in the way many expected at the outset. It makes it easier to start tasks, speeds up execution, and expands individual capabilities. At the same time, without clear boundaries and rules, it makes it far easier to do more and much harder to know when to stop.