9 min read · By Tim

CEO Analysis: You Think You're Asking "How Many People Can AI Save," But You're Actually Asking "Who Takes Responsibility for Results"

The key to enterprise AI adoption isn't saving headcount; it's redefining organizational authority, budgets, performance, and risk: treating AI as new production capacity and rewriting the incentive mechanisms around it. Otherwise, you'll just get stuck in transformation pains.

1. The Wrong but Reasonable Question

The most common question in boardrooms is:

"We've deployed Copilot or enterprise ChatGPT. How many people and how much cost can we save this year?"

This question is reasonable because CFOs are wired to compress uncertainty into numbers.

But it almost guarantees failure, because it mistakenly treats AI as a "cheaper tool" rather than a new layer of the productivity structure. When you open with headcount-saving language, your organization will respond by protecting headcount, and the whole initiative will slowly suffocate in politics and incentives.

What you should really be asking is: "If AI becomes new production capacity that every team can deploy, how should our authority, budgets, performance metrics, and risk boundaries change?"


2. Why Mainstream Approaches Fail

Mainstream approaches usually look like this: an AI Office or AI CoE (Center of Excellence) is set up, 10 use cases are selected, a few workshops are held, tools are purchased, some PoCs are run, and then each BU is expected to spread adoption on its own.

Most results stop at "some people use it, it's kind of cool, but it doesn't change decisions." McKinsey's 2025 State of AI makes the same point: most organizations are still stuck in experimentation or pilots, not truly scaling. (McKinsey & Company)

Why Does This Inevitably Break Down?

First friction: power structures don't allow cross-process work

Where AI can truly generate compound returns is almost always cross-departmental: sales to delivery, customer service to product, legal to marketing. But real budgets and KPIs are locked inside department walls. Asking a manager to spend their own cost center to buy someone else's revenue isn't transformation; it's charity.

Second friction: incentive design turns AI into pure theater

As long as performance evaluation still centers on "how many people you manage, how many resources you guard," managers will see AI as a threat. So you'll see the two most common pseudo-transformations:

  1. Superficial adoption: AI is really just used as a search engine, and no one dares touch the processes.
  2. Frontline staff use AI to accelerate output, but the saved time is filled with more miscellaneous tasks, creating new execution debt.

Third friction: risk and responsibility are left on the floor

When AI output goes wrong, who takes the blame? Legal, security, brand, and frontline customer service all fear being scapegoated. So the safest strategy is "don't launch yet," which leads every department to delay and none to let AI truly go live.

3. Three Enterprise Cases: Real Conflicts, Misjudgments, and Corrections Around AI

Shopify

Shopify Made "AI" a Prerequisite for Resource Requests, Hitting the Organization's Nerve

Shopify CEO Tobi Lütke's approach: before teams can add headcount or resources, they must first prove why AI can't do the job. AI proficiency is also written into performance and peer-review contexts. (The Verge)

The key isn't "saving people," but changing the default of power: resource requests used to default to "give people first"; now they default to "use AI first." But this also triggers conflicts:

  • Mid-level managers feel disempowered, because headcount is their most direct source of influence
  • HR gets stuck, because once the performance language changes, competency models must follow
  • Product and engineering clash, because "whether AI can do it" actually depends on processes, data, and permissions, not on how well you can talk

Shopify's signal is clear: AI isn't an IT project, but "the baseline of how we work." This forces organizations to face something most people don't want to face:

Do you want to upgrade people with AI, or make people consume each other with AI?

Klarna

Klarna First Loudly Claimed AI Replaced 700 People, Then Brought "Human Customer Service" Back

In 2024, Klarna publicly claimed that its AI assistant could handle a massive volume of customer service conversations, even describing it as equivalent to 700 full-time customer service staff. (Klarna)

But by 2025, Klarna began emphasizing that customers could "always choose humans," and started recruiting again and adjusting its approach. (Bloomberg)

This pivot is crucial because it reveals what many CEOs initially refused to admit: AI can compress average handling time, but it can also erode trust. Customer service isn't just solving problems; it is simultaneously handling emotions, gray areas, exceptions, and customers' feelings about the brand.

Klarna's correction essentially says: the cost curve can be rewritten by AI, but the brand curve may not be carried by AI.


Air Canada

Air Canada's Chatbot Failure Made "Responsibility Attribution" a Public Lesson

Air Canada was ordered to pay compensation after its chatbot gave a customer incorrect information, a case that was widely reported and analyzed. (The Guardian)

For enterprise leaders, this isn't a "customer service bots are dangerous" story; it's a "when AI speaks in your name, you can't treat it as outsourcing" story.

The real question is: have you designed a clear responsibility chain for AI output, including review, authorization, exception handling, and a remediation rhythm after errors?

4. From Reality to Research, and Back from Research to Reality

After the three cases above, let's go back to the research and see what actually holds up:

  • AI can indeed boost productivity, but the benefits are unevenly distributed. In NBER research on customer service, generative AI assistance raised productivity by about 14% on average, with gains concentrated among less experienced or lower-skilled employees. (NBER) For CEOs, this means AI isn't just cost savings; it's more like a mechanism for rapidly diffusing tacit knowledge. But you must also accept the flip side: experts may not get faster, and may even see quality fluctuate as processes are rewritten.
  • Most enterprises are stuck at "can do a PoC, can't create value." BCG's 2024 data shows only a small portion of companies have truly moved from AI to measurable value creation. Many remain at proof of concept, and an even smaller share create value at scale. (BCG)
  • ROI below expectations is actually common. An HBR article from November 2025 notes that a significant proportion of senior executives believe their AI adoption ROI is below expectations, and only a small portion believe it truly exceeded them. (HBR)

Putting these three points together, you'll find:

AI model progress doesn't automatically become competitiveness. Whether it becomes competitiveness depends on whether you've rewritten your organization's decisions and incentives.

5. Upgrading AI from "Tool" to "Long-term Competitiveness"

Here are three principles that are focused, actionable, and can actually move an organization:

One: Treat AI as "New Production Capacity" and Manage It as a Portfolio, Not Projects

Project management's default goal is delivery. Portfolio management's default goal is proportional allocation. AI's real value isn't "building a chatbot," but deciding which decisions and processes you hand to the new production capacity, which you keep for humans, and how the two complement each other.

This directly changes your long-term competitiveness, because it moves resources from inside department walls to the weakest point in the company's entire value chain.

Two: Make Responsibility Chains Hard Rules, Otherwise Risk Will Eat All Benefits

You can't govern AI with "everyone be careful." You need clear Decision Rights (a minimal sketch in code follows the list):

  • What content can be auto-published
  • What must be human-reviewed
  • What must escalate to specific roles
  • What's the SOP for remediation when errors occur
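
To make this concrete, here is a minimal sketch of a decision-rights table in Python. The content categories, role names, and routing tiers are illustrative assumptions, not a reference implementation; the point is that every AI output type gets an explicit route and an explicit owner.

```python
from enum import Enum

class Route(Enum):
    AUTO_PUBLISH = "auto_publish"   # ships with no human in the loop
    HUMAN_REVIEW = "human_review"   # a named reviewer signs off first
    ESCALATE = "escalate"           # goes to a specific accountable role

# Hypothetical mapping: every AI output type has an explicit route and owner.
DECISION_RIGHTS = {
    "faq_answer":       {"route": Route.AUTO_PUBLISH, "owner": "support_lead"},
    "refund_decision":  {"route": Route.HUMAN_REVIEW, "owner": "support_lead"},
    "policy_statement": {"route": Route.ESCALATE,     "owner": "legal_counsel"},
}

def route(output_type: str) -> dict:
    """Fail closed: any output type nobody has claimed escalates by default."""
    return DECISION_RIGHTS.get(
        output_type,
        {"route": Route.ESCALATE, "owner": "ai_governance_board"},
    )
```

The design choice that matters is the fail-closed default: an output type no one has claimed escalates automatically, so responsibility can never quietly be left on the floor.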

Air Canada's lesson is that when AI speaks in your name, responsibility doesn't disappear because "it's a machine." (McCarthy)

Three: Rewrite Reward Mechanisms

Reward managers for "releasing capacity," not for "guarding headcount"

If the organization doesn't change how it rewards people, AI will only become an accelerator at the bottom layer and a source of resistance in the middle layer.

Shopify's approach is worth studying because it made AI a prerequisite for resource requests, essentially forcing everyone, at the institutional level, to acknowledge that using AI is part of the job. (The Verge)

Klarna's case of bringing humans back to collaborate with AI is a reminder that releasing capacity doesn't mean eliminating people; you need to move people to where AI doesn't do well, such as relationship building, urgent needs, and brand trust. (CX Dive)

"If names are not correct, language will not be in accordance with truth."

If you don't clearly define AI's role and responsibilities, cross-departmental collaboration will never run smoothly.

6. Action Guide for Leaders: Do / Don't

Do

  • Do make AI part of the resource-allocation mechanism. Like Shopify, make headcount and budget requests answer "what can't AI do" before "what do we want." (The Verge)
  • Do establish three-layer process routing: fully automated; human-reviewed, then automated; must involve experts. This isn't tool selection; it's making risk and responsibility visible.
  • Do manage AI with two sets of metrics: efficiency metrics (time, cost, output volume) plus trust metrics (CSAT, complaint rate, return rate, escalation rate); see the sketch after this list. Klarna's pivot is essentially trust metrics backfiring on the efficiency narrative. (CX Dive)
  • Do run AI value on a portfolio rhythm: 30 days for usable changes, 90 days to see process KPIs, 180 days to see whether organizational capabilities and talent structure have truly shifted. Use rhythm to crush "infinite pilots."
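
Here is a minimal sketch of what pairing efficiency metrics with trust metrics can look like; the metric names and thresholds below are illustrative assumptions you would replace with your own:

```python
from dataclasses import dataclass

@dataclass
class AIProcessScorecard:
    # Efficiency metrics
    avg_handle_time_min: float   # average handling time per case, minutes
    cost_per_case: float
    # Trust metrics
    csat: float                  # customer satisfaction score, 0-100
    complaint_rate: float        # complaints per 100 cases
    escalation_rate: float       # share of cases escalated to humans

    def efficiency_counts(self, csat_floor: float = 80.0,
                          complaint_ceiling: float = 2.0) -> bool:
        """Efficiency gains only count while trust metrics hold their floor."""
        return self.csat >= csat_floor and self.complaint_rate <= complaint_ceiling

# Example: fast and cheap, but trust is eroding, so the gains don't count.
q3 = AIProcessScorecard(avg_handle_time_min=4.2, cost_per_case=1.10,
                        csat=78.0, complaint_rate=2.6, escalation_rate=0.31)
print(q3.efficiency_counts())  # False: the Klarna-style failure mode
```

Reviewing both sets on the same 30/90/180-day rhythm keeps an efficiency story from shipping while the trust story quietly breaks.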

Don't

  • Don't use "layoff stories" as transformation declarations. You'll get a company-wide defensive posture: people start hiding problems, transparency drops, and AI becomes a weapon in political battles.
  • Don't leave governance to legal or security alone; that becomes a permanent "no." Governance must be co-owned with the business, because risk comes from process design, not just models.
  • Don't treat a PoC as an achievement. BCG's data says it plainly: many companies are stuck at PoC, and very few truly scale and create value. (BCG Global)

6. "When the higher and lower ranks have the same desire, victory is assured." — The Art of War

As a senior executive, you're responsible for making every level act within the same set of rewards and responsibilities; otherwise AI will only ever be treated as someone else's KPI.

So stop asking who uses AI more, and start asking:

"Which processes have been rewritten because of AI?" "Have responsibility chains and reward mechanisms been updated in sync?"

When organizations start managing with these questions, your company's AI will transform from a "flashy cost center" into a "long-term competitiveness engine."

Further reading: Why Wanting to Do Everything is a Red Flag for Startup Partners

Support me and keep more quality articles coming: Buy Me A Coffee

Thanks for reading,
- Tim
