When AI Makes Work Faster, Work Often Gets Harder

AI is showing up in more places than most leaders expected. It's not just automation and analytics; it's writing, planning, synthesis, and decision prep—work many of us quietly equate with expertise.

And that's where the story gets complicated.

Because the biggest barrier to AI adoption isn't whether the model is "good enough." It's whether people feel safe, capable, and in control as the work around them changes.

The efficiency trap: removing friction can remove "breathers"

A Wall Street Journal piece by Callum Borchers landed a point I keep coming back to: when AI eliminates mundane tasks, we can also erase the mental downtime that fuels creativity. The article describes how "busywork" can function as whitespace: those low-intensity moments where ideas show up. It warns that optimizing away boredom can come with an unexpected cost.

I'm a strong believer in AI's ability to streamline busywork. But friction isn't always waste.

Some of my best thinking happens in the whitespace: the walk between meetings, the quiet 20 minutes after finishing a deck, the pause before the next call. If every saved minute instantly gets refilled, we haven't saved time… we've increased pressure.

The risk: if we aren't careful, we'll turn our most creative thinkers into high-speed reviewers, without leaving room to actually think.

The Berkeley signal: AI doesn't just accelerate work, it can intensify it

This isn't only philosophical. Research coming out of UC Berkeley Haas echoes the same pattern: AI can intensify work by expanding the range of tasks people take on, blurring boundaries between work and non-work, and increasing multitasking.

In practice, this can look like:

  • "Time saved" getting reinvested into more throughput.
  • More cross-functional stretching (sometimes empowering, sometimes exhausting).
  • Higher expectations for speed, creating a new baseline that quietly resets what "good" looks like.

This is the part leaders often miss: even when people like the tool, the system around the tool can drive workload creep.

Why AI feels threatening: it touches identity, not just tasks

That's why I'm grateful for the framing in HBR's "Why Gen AI Feels So Threatening to Workers" by Erik Hermann, Stefano Puntoni, and Carey K. Morewedge. Their core idea is simple and human: adoption depends on whether AI supports or frustrates three psychological needs.

  • Competence: Am I still good at what I do?
  • Autonomy: Do I still have agency over how my work gets done?
  • Relatedness: Do I still belong here?

You don't resolve those questions with a training module. You resolve them by treating these needs as design requirements, not "soft stuff."

A practical structure for leaders: AWARE

One reason this HBR article stands out is that it doesn't stop at diagnosis. A summary from Boston University's Digital Business Institute highlights the authors' AWARE framework, a set of leader moves: Acknowledge concerns, Watch coping behaviors, Align support systems, Redesign workflows around human–AI synergies, and Empower workers through transparency and inclusion.

As AI adoption keeps evolving, it's encouraging to see structure like this come together, because the people impact is often what determines whether the technical rollout succeeds.

The shift: AI adoption as work redesign

AI adoption starts to stick when we stop treating it like a software launch and start treating it like work redesign.

Software launch mindset

  • Success = tool adoption and usage
  • Focus = seats, licenses, training completion
  • Risk = technical friction

Work redesign mindset

  • Success = role clarity and value creation
  • Focus = decision quality, collaboration, accountability, pace
  • Risk = psychological burnout (if expectations reset silently)

How this aligns with Evaila's people-focused AI adoption approach

At Evaila, our AI Adoption Framework is built around the workforce and how people work with AI:

  1. Start with the work, not the tool. Map workflows to identify where AI reduces low-value effort and where it might erase whitespace.
  2. Redesign roles with the people doing the job. Adoption rises when people have agency in shaping how AI changes their day-to-day.
  3. Build competence as a capability. Give people time to practice, share prompts, and calibrate what "good" looks like together.
  4. Protect autonomy with guardrails. Establish clear policies and quality standards while leaving room for judgment and creativity.
  5. Measure human outcomes alongside business outcomes. Track burnout risk and deep-work time as closely as productivity.

A human-centric scaling checklist

If you're scaling AI, put these questions on the same slide as your technical roadmap:

  • Competence: Are we rewarding the learning curve, or only output speed?
  • Autonomy: Where do teams need "AI-optional" zones to protect human-led deep work?
  • Relatedness: Are we reinforcing community and shared standards so people don't feel like isolated operators?
  • Whitespace: Have we explicitly redefined "full capacity" to include buffer time for recovery and synthesis?

The takeaway

AI will absolutely change productivity. But its most immediate impact is often psychological: how people experience their value, control, and belonging, and whether the workday tips into nonstop intensity.

If we want AI adoption that lasts, we have to build for those needs on purpose.

Workforce readiness is one of four dimensions that determine whether AI adoption succeeds or stalls. For a structured way to evaluate where your organization stands, see our Complete Guide to AI Readiness for Business Leaders.

Published On

February 13, 2026

Author

Emily Lewis-Pinnell