AI is not a tech problem.
It's a people and operations problem. Pick a door.
AI Fluency. Twelve months. A thousand people.
The engagement we ship most often. A baseline. A leader cohort. Champions in every team. A review process that ships builds in days, not weeks. Three re-measurements that prove the lift is real.
- Baseline — every team, day one
- Leaders first — hands-on, until they can demo
- Champions — one per function, protected time
- Review — Monday idea, Friday ship
- Re-measure — 90, 180, 365 days
The AI Paradox.
Why most rollouts haven't moved the number. The thesis in one line — companies bolted AI onto broken workflows and skipped the organizational homework.
- Find the real bottleneck first
- Cap tools at three per role
- Redesign the workflow, then drop the tool
- Capture the time AI frees up
- Protect the junior pipeline
From principles to practice.
Most companies bought the licenses months ago. Fluency is a different purchase, and most haven't made it. A year in, five percent of the workforce uses AI daily. A third tried it once and forgot. Everyone else still emails PDFs around. This engagement closes that gap across a thousand people and puts a real number on how far we moved it.
What actually gets shipped.
The five principles fit on one page. Making them real across a thousand people takes a year. Each one maps to a workstream with real deliverables and a real measurement cadence.
Ship culture, not a strategy deck.
No 40-page PDF. We build the culture directly: a company-wide wins channel the CEO posts in first, monthly show-and-tells at all-hands, an internal gallery where builds get stolen freely, office visits, hackathons with real prizes. The deliverable is a culture where AI is how the work gets done.
Measure everyone. Name the expectation.
Day one, we measure where every employee stands. Seat-level usage, fluency survey, interviews across every function. The baseline is the scoreboard. Leadership names AI as part of everyone's job, publicly and repeatedly. If the number doesn't move at 90, 180, and 365, we didn't do the job.
Leaders first. Champions in every team.
Every director, VP, and C-level trained until they can demo their own workflows. Then one champion embedded per function, with protected time. Workshops, dept-level hackathons, weekly drop-ins in every office. Training stays central. The building lives in the teams that know which problems are worth solving.
Fast review. Real guardrails.
A review path for anything touching production or clients. Days, not weeks. Templates for the common cases. Real guardrails without ticket queues, steering committees, or compliance bottlenecks. Monday idea, Friday ship. We build the process and train the reviewers.
Remove the friction. Share the wins.
Pre-approved tools covering 90% of what people want to do. Data access policies in plain language. A shared library of prompts, templates, and finished builds anyone can fork. When someone builds something good, the whole company sees it within a week.
Day one
- Seat usage audit
- 1,000-person survey
- Leadership interviews
- Baseline report

Leaders first
- Exec hands-on
- CEO in wins channel
- Champions named
- All-hands commit

Through day 90
- Champions embedded
- Review live
- Tools pre-approved
- 90-day re-measure

Through month six
- Dept hackathons
- Office drop-ins
- Gallery live
- 6-month re-measure

Through month twelve
- Advanced training
- Library compounds
- Final re-measure
- Litmus test passed
What lands on your desk.
Six concrete artifacts. Every one traceable to a baseline number and a re-measurement.
A fluency baseline.
Where every team stands on day one. Usage data, survey results, interviews — a one-page read for leadership. The number we measure against.
An executive playbook.
Hands-on training for every director, VP, and C-level until they can demo their own workflows. The playbook they carry into every team meeting.
A champion network.
One embedded champion per function. Protected time. Monthly syncs. A group that outlasts the engagement and keeps compounding.
A review process.
A path from idea to shipped build in days. Templates for the common cases. Guardrails that protect without blocking. Reviewers trained.
A shared library.
Pre-approved tools, prompts and templates, a gallery of finished builds anyone can fork.
Three re-measurements.
At 90 days, 6 months, and 12 months. Same cohort, same survey, same audit. The proof the number moved.
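For the analytically inclined, the lift math is simple enough to sketch. A minimal example, assuming the survey lands as one row per employee per wave with a 0-100 fluency score and a daily-use flag; the file name and column names are illustrative, not the actual instrument.

```python
import pandas as pd

# Illustrative schema: one row per employee per measurement wave.
# Assumed columns: employee_id, wave ("baseline", "day90", "day180", "day365"),
# fluency_score (0-100), uses_ai_daily (0/1).
df = pd.read_csv("fluency_survey.csv")

waves = ["baseline", "day90", "day180", "day365"]

# Same cohort: keep only people present in every wave,
# so the lift reflects change, not churn.
complete = (
    df.groupby("employee_id")["wave"]
    .nunique()
    .loc[lambda n: n == len(waves)]
    .index
)
cohort = df[df["employee_id"].isin(complete)]

# One number per wave: mean fluency and the share of daily users.
summary = cohort.groupby("wave").agg(
    mean_fluency=("fluency_score", "mean"),
    daily_share=("uses_ai_daily", "mean"),
).reindex(waves)

# Lift = each wave minus day one. This is the number that has to move.
print((summary - summary.loc["baseline"]).round(2))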
The right fit looks like this.
A company of 500 to 2,500 people. Licenses purchased, adoption stalled. A CEO who can credibly post first in a wins channel. Leadership willing to name AI as part of everyone's job and mean it.
Willingness to measure honestly and let the results be what they are.
The wrong fit looks like this.
Looking for a framework doc to hand to a working group. Wanting a 12-week "AI transformation" that finishes before Q4. Unable to get the CEO in the room for four hours. Preferring governance theater to measured lift.
We'd rather tell you now than six months in.
The data doesn't match the hype.
Ninety percent of firms report no productivity impact from AI. Not because the technology isn't ready. Because they bolted it onto broken workflows and skipped the organizational homework. That's a people and operations problem. Which is what we fix.
Apollo's Torsten Slok put it bluntly: "AI is everywhere except in the incoming macroeconomic data." That's the paradox in one sentence. Any CEO whose AI budget is tracking above last year's number should stop and look twice at the output.
The gap between narrative and number now has its own literature. The question isn't when the gains will land. It's who captures them.
Quote: Torsten Slok, Chief Economist, Apollo Global Management.
Same paradox. Forty years later.
The 1970s and 1980s saw enormous IT investment with almost no productivity gain — until the mid-1990s. The dominant explanation: "lags due to learning and adjustment." IT benefits take two to five years. They only appear when firms reorganize around the technology.
The firms that won the post-1995 boom were the ones that used the lag period to re-engineer how work got done. Not the ones with the most PCs. Not the ones with the biggest IT budgets. The ones that did the organizational homework.
The returns came from the complementary investment — process redesign, new job descriptions, retraining, different performance metrics, different decision rights. Walmart didn't win the 90s because they had better computers. They won because they rewired the retail operating model around what the computers made possible.
The same lesson applies cleanly to AI. The gains are coming. They'll accrue to the firms that use this lag period to reorganize, not the ones still trying to solve the problem with more licenses.
Thesis: Organizational investment, not more hardware.
Read: Brynjolfsson · Lags due to learning.
What's actually breaking.
The research tells the macro story. The ground tells the operational one. Across hundreds of threads from people inside the rollouts, a cleaner picture emerges — companies are bolting AI onto broken workflows. AI amplifies what's already there. If what's there is broken, you get faster broken.
The real bottleneck is upstream.
Organizations spend months speeding up the doing when the bottleneck was always the deciding. Scoping, prioritization, review, sign-off. AI accelerates the execution layer and leaves the decision layer more exposed. The queue doesn't shorten. It moves upstream.
Seniors get 4x leverage. Juniors lose the learning path.
AI automates the tasks juniors used to learn from: research, first drafts, synthesis. The senior with judgment becomes dramatically more productive. The junior who would become that senior in five years loses the apprenticeship. A three-to-five-year knowledge-work time-bomb. IBM's CHRO has flagged it publicly. Few firms have a plan.
Flagged by IBM CHRO (Fortune).
Faster output. Heavier review.
AI-assisted output is voluminous and uneven. More drafts, more decks, more emails — all generated in the time it used to take to produce one. The reviewer carries more cognitive load, not less. The "productivity gain" quietly transfers to the producer and burdens the person signing off.
Employees save time. The business doesn't see it.
A Stanford study tracked what workers did with AI-saved time. Honest answer: TV and friends. Rational enough. Why would anyone hand efficiency gains back for free? Without an explicit capture mechanism, the gains evaporate into slack. The P&L stays flat.
Takeaway: Build capture mechanisms or capture nothing.
Past three tools, productivity goes backwards.
BCG found that once workers exceed three AI tools, productivity drops from context-switching. The instinct — buy more tools, pilot more vendors, layer more copilots — is self-defeating. Pick two or three per role. Go deep.
Cap: Three per role. Case required for the fourth.
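Enforcing the cap is one of the few places this work reduces to a query. A sketch against a hypothetical seat-usage export, one row per active person-tool pair; the file shape and names are ours, not BCG's.

```python
import pandas as pd

# Hypothetical audit export. Assumed columns: employee_id, role, tool.
usage = pd.read_csv("seat_usage.csv")

# Distinct active tools per person.
tools_per_person = usage.groupby(["employee_id", "role"])["tool"].nunique()

# Flag anyone past the three-tool cap; the fourth needs a written case.
over_cap = (
    tools_per_person[tools_per_person > 3]
    .reset_index(name="tool_count")
    .sort_values("tool_count", ascending=False)
)
print(f"{len(over_cap)} people over the three-tool cap")
print(over_cap.head(10))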
Five moves that actually stick.
The five moves we push on every engagement. Not glamorous. Not framework-shaped. The organizational homework the research says wins.
Find the real bottleneck first.
For most organizations, the blocker isn't speed. It's scoping, prioritization, review, or billing. We build a bottleneck diagnostic into every engagement so we don't apply AI to the wrong stage. A faster execution layer on top of a broken decision layer is worse than what you had before.
Two or three tools. Go deep.
The BCG brain-fry finding is one of the most practical insights in the literature. Resist the instinct to push ten tools at a team. Pick two or three high-leverage ones. Go deep on workflow integration. The shelf with fifteen tools is a shelf with zero adoption.
Pair every rollout with a redesigned workflow.
Central lesson from the Solow resolution: IT delivered returns only when paired with complementary organizational investment. Translated to AI — no training session ends with "here's the tool." It ends with "here's the redesigned workflow, here's what stops being done, here's how the handoff changes."
When AI frees 20% of capacity, where does it go?
Without an explicit capture mechanism, saved time evaporates. Rationally. The question: when AI frees up a fifth of someone's week, where does it go? Higher-value work? New revenue? A shared pool? Pick one and commit.
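The stakes are easy to make concrete. A back-of-envelope sketch, using the thousand-person headcount from the engagement above; the 40-hour week and the capture rates are illustrative assumptions, not findings.

```python
# Back-of-envelope: what 20% freed capacity means at org scale.
headcount = 1_000        # engagement scope
hours_per_week = 40      # assumption: standard full-time week
freed_fraction = 0.20    # AI frees a fifth of each person's week

freed = headcount * hours_per_week * freed_fraction
print(f"Freed capacity: {freed:,.0f} hours/week")  # 8,000

# Zero capture is the Stanford finding; everything above it is a decision.
for capture_rate in (0.0, 0.25, 0.50):
    captured = capture_rate * freed
    fte = captured / hours_per_week  # rough full-time equivalents
    print(f"Capture {capture_rate:.0%}: {captured:,.0f} h/week ≈ {fte:,.0f} FTE")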
Build the apprenticeship back, deliberately.
AI automates the tasks juniors learned from. Unless you rebuild the development path on purpose, you're hollowing out the future senior bench. Structured problem-solving reps without AI. Required rationale-writing. Supervised AI-output review. A leadership-pipeline risk, not nostalgia.
It's a people and operations problem.
That's what we do.
The homework. While everyone else buys licenses.
The Solow paradox took twenty years to resolve. The firms that won in the 90s used the lag to rewire themselves. The rest were busy buying hardware. That's the pitch.
Everything above is operational work dressed up as a tech rollout. Decision rights. Process redesign. Pipeline development. Compensation structures that capture gains. Review rhythms. Tool discipline. These are the things we've been doing for twenty years, before anyone called it AI strategy.
Our AI Fluency engagement is the operator version of the playbook. A baseline. A leader cohort. Champions in every team. A review process that ships builds in days. Three re-measurements over twelve months. Not a training course — an organizational rewire, with AI as the reason we're doing it now.
If you're a CEO watching the $250B bet fail to show up on your own P&L, we help you be one of the firms where it eventually does.
Read Next: AI Fluency Engagement · 7 min.
Ready to do the homework?
First conversation is free. Ninety minutes with your leadership team. We tell you honestly where your bottleneck is, which move matters most, and whether we're the right hands to help. No deck. No deliverable. No obligation.