CEO Analysis: How to Develop a Builder Mindset?
"Should we roll out AI company-wide? Which tool should we buy so we don't fall behind?"

The anxiety is reasonable: 2024 to 2025 really does look like a watershed for AI transformation. But the question itself often leads organizations down a difficult path that isn't necessarily effective. Teams sink enormous amounts of time into testing different AI tools while the focus sits in the wrong place entirely.
The common market intuition is to treat AI as a more powerful round of software procurement: buy the tools, deploy the systems, run the training, and finally mandate adoption from the top down. But the real challenge was never just "will everyone use it"; it's "who defines the problems, who decides the processes, and who takes responsibility for success or failure." (See CEO Analysis: You Think You're Asking "How Many People Can AI Save," But You're Actually Asking "Who Takes Responsibility for Results")
If you see this as a problem of power structure and incentive design, many recurring bottlenecks become understandable.
▍Why Mainstream Top-Down Procurement Approaches Are Increasingly Unworkable
Most enterprises are taking a path that looks roughly like this:
- IT or digital transformation teams pick a tool.
- Push internal training and usage guidelines.
- Use usage rate or seat count as KPIs.
- Hit the brakes at the first sign of security, compliance, or data leakage concerns.
It looks complete, but often fails in three places:
- First failure point: power sits in "approval," not "output." AI makes building a usable system easier, but it also makes an organization's existing gatekeepers more likely to block things in the language of risk. Many teams aren't unwilling; they simply lack the authority to touch data, processes, or cross-departmental responsibilities. So everyone ends up making slide decks and writing summaries: output that merely looks busy.
- Second failure point: incentives reward usage behavior, not closed result loops. When the KPI becomes "how many times was it used" or "did you turn on Copilot," people naturally apply AI to low-risk, low-impact work. What actually needs restructuring are the value loops that drive ROI: lead-generation loops, quoting loops, customer-service loops, internal approval loops.
- Third failure point: treating AI as automation while ignoring that it is "unstable labor." GenAI is highly capable but has unclear boundaries. Researchers call this the jagged frontier: the same model performs like a senior expert on some tasks, then suddenly drops to novice level on others. That isn't users failing to try hard enough; it's the nature of the technology. (Harvard Business School)
So the mainstream approach tends to land in an awkward place: significant investment, only scattered improvements, and an organization that concludes "AI is trendy but not practical." Which is a real shame.
▍Transitioning from "Tool User" to "Builder"
I'll use three cases to trace the path from tool user to system builder (Builder). What they share: none of the protagonists are engineers, and none planned to become one, yet they all did the same thing: they put AI into a measurable business loop.
Case A: Turning "Trend Finding" into a Repeatable Agent Process
Slate marketing VP Andrew Harding's struggle: tracking trends, watching public video data, running benchmarks (the list goes on 🥱). These tasks used to demand huge amounts of manual searching and organizing. He then used Zapier Agents to build a lightweight "Trend Scout" process that turned time-consuming information gathering into automated, structured input, and wired those inputs back into the daily rhythms of marketing and sales. (Zapier)
The key isn't which tool he used; it's that he put AI into a verifiable loop. Zapier's case article also mentions the process generated 2,000+ leads in a single month, proof that it scales.
Case B: Using Agents to Turn Research Work into Stable Daily Output
NisonCo founder Evan Nison handed the front-end research for PR and SEO to AI agents: scan the news daily, extract company and person information, write it into Google Sheets, turning research into a continuous, low-friction, trackable machine.
What's most worth noticing here isn't the saved manpower but the redesigned rhythm. By Zapier's figures, the automation lifted leads from 270 per week to nearly 400, a 48% increase, with an estimated $2,500 in monthly cost savings. (Zapier)
This approach is essentially saying:
When software output costs drop rapidly, competitive advantage moves to "who can design processes into sustainable systems."
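To make the pattern behind Cases A and B concrete, here is a minimal sketch of the same loop: a scheduled job gathers raw items, a model call extracts structured fields, and the results land in a shared file. Everything in it (the feed, extract_entities, the CSV target) is a hypothetical stand-in; the actual cases were built on Zapier Agents, not custom code.

```python
import csv
import os
from datetime import date

def fetch_todays_items() -> list[str]:
    """Stand-in for a daily news/feed scan."""
    return ["Acme Corp raises Series B; CEO Jane Doe plans to expand rentals"]

def extract_entities(text: str) -> dict:
    """Stand-in for an LLM call that turns raw text into structured fields."""
    return {"company": "Acme Corp", "person": "Jane Doe", "summary": text}

def run_daily_scan(out_path: str = "research.csv") -> None:
    """One pass of the loop: gather, extract, append to the shared sheet."""
    rows = [extract_entities(item) for item in fetch_todays_items()]
    is_new = not os.path.exists(out_path) or os.path.getsize(out_path) == 0
    with open(out_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["date", "company", "person", "summary"])
        if is_new:
            writer.writeheader()  # header only once, for a fresh file
        for row in rows:
            writer.writerow({"date": date.today().isoformat(), **row})

if __name__ == "__main__":
    run_daily_scan()  # scheduled daily (cron, Zapier); output feeds the team's rhythm
```

The point of the sketch isn't the code; it's that every run produces a dated, inspectable record, which is what makes the loop verifiable.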
Case C: Let People Who Understand the Field Build Tools, Not Queue for IT
EquipmentShare's case has a very common backdrop: construction equipment rental and management is an industry with complex field operations, fragmented processes, and fast-changing requirements. Rather than relying solely on centralized IT for every system, they used low-code platforms like Retool to let the people closest to the business turn pain points into internal tools, gradually expanding them into multi-user applications. (Retool)
What I love about this case is that it moved power from "people who can write code" to "people who understand processes best." This is actually the prototype of the Builder role.
(Attached: me as a builder at MIT Bootcamp, sleeping less than 2 hours a night for 5 days straight 💀)

▍Academic Research and Market Data
If we abstract the above three cases, they're all doing the same thing: using GenAI to turn "individual capability" into "replicable organizational capacity." Research is pointing in the same direction.
- MIT research (Noy & Zhang) observed in everyday writing tasks that participants using ChatGPT completed work about 40% faster with 18% higher quality. More interestingly, weaker performers improved the most, so the gaps actually narrowed. (Science)
- NBER research (Brynjolfsson, Li, Raymond) found in customer service settings that a generative AI assistant raised average productivity by 14%, with novice or low-skilled employees improving by up to 34%, alongside observable changes in customer sentiment and employee retention. (NBER)
- HBS research and BCG experiments flag an easily overlooked reality: GenAI's value isn't evenly distributed. People are greatly amplified on some tasks, but on tasks beyond the model's frontier they may produce errors faster, and be wrong in very persuasive ways.
- McKinsey's estimates take the macro view: generative AI could add $2.6 to $4.4 trillion in economic value annually across multiple functions. These estimates don't mean every company collects an equal dividend, but they do remind us this isn't a single-point tool upgrade; it's a rewrite of the production capacity curve. (McKinsey & Company)
Put the enterprise cases and the academic and market research side by side, and one conclusion emerges:
AI can amplify capability gaps, and it can narrow them. Which one happens depends on whether you put it in the right position in the system.
▍Key Insights and Thinking Framework
I currently condense the move from technology consumer to system builder into three principles:
Principle 1: Don't Think in "Tools," Think in "Value Loops"
Before answering "which AI should we use," a Builder first chooses a loop, such as:
- The lead-to-meeting loop
- The quote loop, from need to deliverable
- The complaint loop, from receipt to closure
Then place AI at the bottleneck that most changes the loop's throughput. This immediately raises power-structure questions: do your permissions allow AI to sync internal company data? Or can you not even see the backend status of the current process yourself? A minimal sketch of what measuring a loop can look like follows.
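The sketch below, with made-up stage names and timestamps, computes the median dwell time between stages of a hypothetical lead-to-meeting loop; the slowest transition is the bottleneck, and the first candidate for AI.

```python
from datetime import datetime
from statistics import median

# Hypothetical event log: each record marks when a lead entered a stage.
events = [
    {"lead_id": 1, "stage": "inbound",   "at": datetime(2025, 1, 6, 9, 0)},
    {"lead_id": 1, "stage": "qualified", "at": datetime(2025, 1, 8, 15, 0)},
    {"lead_id": 1, "stage": "meeting",   "at": datetime(2025, 1, 9, 10, 0)},
    {"lead_id": 2, "stage": "inbound",   "at": datetime(2025, 1, 6, 11, 0)},
    {"lead_id": 2, "stage": "qualified", "at": datetime(2025, 1, 13, 9, 0)},
    {"lead_id": 2, "stage": "meeting",   "at": datetime(2025, 1, 13, 16, 0)},
]

STAGES = ["inbound", "qualified", "meeting"]

def stage_durations(events: list[dict]) -> dict[str, float]:
    """Median hours leads spend between consecutive stages."""
    by_lead: dict[int, dict[str, datetime]] = {}
    for e in events:
        by_lead.setdefault(e["lead_id"], {})[e["stage"]] = e["at"]
    durations = {}
    for a, b in zip(STAGES, STAGES[1:]):
        hours = [
            (ts[b] - ts[a]).total_seconds() / 3600
            for ts in by_lead.values()
            if a in ts and b in ts
        ]
        durations[f"{a} -> {b}"] = median(hours)
    return durations

if __name__ == "__main__":
    for transition, hours in stage_durations(events).items():
        print(f"{transition}: {hours:.1f}h median")
    # The slowest transition is where AI goes first.
```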
Principle 2: Treat "Context" as an Asset; Context Is the New Moat
In the AI era, the scarcity of writing code declines while the scarcity of correctly understanding problems rises. Context includes data definitions, edge cases, internal rules, the language customers actually use, and the compromises baked into the product.
Stanford CS146S states explicitly in its course description that AI is changing how software is built and maintained. What ultimately separates organizations isn't who has better prompts, but who turns context into accumulable, transferable, auditable systems, as in the sketch below.
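As a toy illustration of context as an asset (every entry here is hypothetical), the sketch keeps business definitions in a registry with owners and update dates, then injects them into each prompt so the model works from official definitions instead of improvising.

```python
import json

# Hypothetical registry of business definitions. In a real system this would
# live under version control with review and access control, not in a script.
CONTEXT_REGISTRY = {
    "qualified_lead": {
        "definition": "Contact with budget authority who replied within 14 days.",
        "owner": "sales-ops",
        "updated": "2025-01-10",
        "exceptions": ["Existing customers are routed to account management."],
    },
    "turnaround_time": {
        "definition": "Hours from ticket creation to first substantive reply.",
        "owner": "support",
        "updated": "2024-11-02",
        "exceptions": ["Weekends excluded for non-urgent tiers."],
    },
}

def build_prompt(task: str, terms: list[str]) -> str:
    """Prepend the relevant, auditable definitions to the task prompt."""
    context = {t: CONTEXT_REGISTRY[t] for t in terms if t in CONTEXT_REGISTRY}
    return (
        "Use these official definitions (do not improvise your own):\n"
        + json.dumps(context, indent=2)
        + f"\n\nTask: {task}"
    )

print(build_prompt("Summarize this week's qualified leads.", ["qualified_lead"]))
```

The design choice worth noticing: the definitions are data, not prose buried in someone's prompt history, so they can be reviewed, versioned, and reused across teams.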
Principle 3: Treat AI as a "New Type of Labor," with Governance and Validation Mechanisms to Match
The key here isn't compliance slogans but realistic risk management: you need a human in the loop, traceable inputs and outputs, and a clear picture of where errors get caught. This is why the "jagged frontier" warning matters: it determines which tasks suit automation and which only suit assistance. (Harvard Business School) A minimal sketch of such a gate follows.
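This is a minimal human-in-the-loop sketch, assuming a model that reports a confidence score; the threshold and the stand-in model call are my assumptions, not any cited company's setup. Low-confidence outputs queue for human review, and every decision is appended to an audit log, which gives you the traceable inputs and outputs described above.

```python
import json
from datetime import datetime, timezone

CONFIDENCE_THRESHOLD = 0.85   # assumed cutoff; tune per task and risk appetite
AUDIT_LOG = "ai_decisions.jsonl"

def call_model(task: str) -> tuple[str, float]:
    """Stand-in for a real model call returning (answer, confidence)."""
    return "Draft reply: we can offer a replacement unit by Friday.", 0.62

def handle(task: str) -> str:
    answer, confidence = call_model(task)
    decision = "auto_send" if confidence >= CONFIDENCE_THRESHOLD else "human_review"
    # Traceable inputs/outputs: append a record of what happened and why.
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "task": task,
            "answer": answer,
            "confidence": confidence,
            "decision": decision,
        }) + "\n")
    if decision == "human_review":
        return f"[QUEUED FOR REVIEW] {answer}"
    return answer

print(handle("Customer reports a broken compressor under warranty."))
```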
▍Words for Decision Makers
If you're at the stage of wanting to push AI without turning it into a revolution, consider a gentler but effective sequence.
1. First, choose a loop you're willing to vouch for. Not the biggest or most ideal one, but one where you can see the metrics move within 2 to 6 weeks. The metrics can be mundane: turnaround time, funnel conversion rate, cases processed per week.
2. Settle decision rights before tool procurement. Who owns the process, who owns the data definitions, who handles exceptions. Get these clear first and adoption costs drop significantly. If decision rights stay murky, more tools only create more chaos, and you'll find plenty of them stuck in the pipeline with nobody sure whether procurement should proceed.
3. Write "AI baseline" into culture, but measure it with results Shopify CEO Tobi Lütke's internal memo sparked discussion. Shopify set AI usage as a baseline expectation, even requiring proof that AI can't do the work before adding headcount. (The Verge) Not every company needs to copy this exactly, but their spirit is worth learning:
AI isn't an elective; it's a new working language. The difference is that you measure output and responsibility, substantive KPIs, not just usage traces.
4. Build trust with small-scale systematic wins, not slogans. Once the first loop runs smoothly, the organization's understanding of risk, benefit, and responsibility boundaries becomes concrete. Expanding AI usage then comes naturally, because you're no longer convincing people to believe in a future; you're showing them a reality that's already happening.
▍Become a Builder!
Many people read this AI wave as "software got cheaper, so everyone needs to learn to code." I'd put it another way:
Replicating capacity got easier, so what's truly scarce is people who can put that capacity in the right place.
Sam Altman has talked about the possibility of "one-person unicorns," even saying that in his circle people are betting on when the first one-person billion-dollar company will appear. (Fortune) Whether or not that happens on schedule, it points to a structural change: company scale is decoupling from headcount. How much capacity you can deploy, and whether you can keep making correct decisions, are what really push a company's ceiling higher.
When AI makes "doing" cheap and easy, a leader's real responsibility looks more like a system designer's: deciding which things are worth doing, and which things aren't worth doing faster.