AI at Vekta · Two Tracks · The Engagement · The Op-Ed · AI Is Not a Tech Problem. It's a People & Operations Problem. · Stockholm · April 2026
Reference Cheatsheet
AI at Vekta · No. 01
Two Tracks · Stockholm · Apr 2026
§ The AI Tab

AI is not a tech problem.

It's a people and operations problem. Pick a door.

§ The Engagement

AI Fluency. Twelve months. A thousand people.

The engagement we ship most often. A baseline. A leader cohort. Champions in every team. A review process that ships builds in days, not months. Three re-measurements that prove the lift is real.

  • Baseline — every team, day one
  • Leaders first — hands-on, until they can demo
  • Champions — one per function, protected time
  • Review — Monday idea, Friday ship
  • Re-measure — 90, 180, 365 days
Read the full engagement →
§ The House Position

The AI Paradox.

Why most rollouts haven't moved the number. The thesis in one line — companies bolted AI onto broken workflows and skipped the organizational homework.

~90% · Firms report no impact
$250B · Spent in 2024
~20 yr · Solow lag
  • Find the real bottleneck first
  • Cap tools at three per role
  • Redesign the workflow, then drop the tool
  • Capture the time AI frees up
  • Protect the junior pipeline
Read the op-ed →
§ The Engagement

From principles to practice.

Most companies bought the licenses months ago. Fluency is a different purchase, and most haven't made it. A year in, five percent of the workforce uses AI daily. A third tried it once and forgot. Everyone else still emails PDFs around. This engagement closes that gap across a thousand people and puts a real number on how far we moved it.

Scope · ~1,000 people
Timeline · 90 days to 12 months
Format · Live, on-site
Deliverable · A measurable lift
§ 01 · Five Principles

What actually gets shipped.

The five principles fit on one page. Making them real across a thousand people takes a year. Each one maps to a workstream with real deliverables and a real measurement cadence.

01§ Workstream
Principle · Culture beats documents

Ship culture, not a strategy deck.

No 40-page PDF. We build culture directly: a company-wide wins channel the CEO posts in first, monthly show-and-tells at all-hands, an internal gallery where builds get stolen freely, office visits, hackathons with real prizes. The deliverable is a culture where AI is how the work gets done.

Signal The CEO posts first in the wins channel. If that doesn't happen, we stop and re-sell the engagement.
02§ Workstream
Principle · Everyone experiments

Measure everyone. Name the expectation.

Day one, we measure where every employee stands. Seat-level usage, fluency survey, interviews across every function. The baseline is the scoreboard. Leadership names AI as part of everyone's job, publicly and repeatedly. If the number doesn't move at 90, 180, and 365, we didn't do the job.

Cadence Baseline · 90 · 180 · 365. Same cohort. Same instrument. The scoreboard is the engagement.
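To make the scoreboard concrete, here is a minimal sketch of the lift arithmetic, with hypothetical names and scores (the real instrument is the survey and seat audit described above): lift at each checkpoint is the change in mean fluency score, computed only over the people present in both measurements, so the cohort stays the same.

```python
# Minimal sketch of the re-measurement scoreboard (hypothetical data shape).
# Same cohort, same instrument: lift is each checkpoint's mean fluency score
# versus the day-one baseline, restricted to people present in both.

def lift(baseline: dict[str, float], checkpoint: dict[str, float]) -> float:
    """Point change in mean score, same-cohort only."""
    cohort = baseline.keys() & checkpoint.keys()  # intersection of respondents
    if not cohort:
        raise ValueError("no overlapping cohort between measurements")
    base = sum(baseline[p] for p in cohort) / len(cohort)
    now = sum(checkpoint[p] for p in cohort) / len(cohort)
    return now - base

baseline = {"anna": 20.0, "bjorn": 35.0, "cleo": 10.0}
day_90   = {"anna": 45.0, "bjorn": 50.0, "cleo": 30.0}

print(round(lift(baseline, day_90), 1))  # mean moved ~21.7 → ~41.7, lift 20.0
```

The same function runs unchanged at 90, 180, and 365 days, which is the point of holding the cohort and instrument fixed.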
03§ Workstream
Principle · Train centrally, apply bottom-up

Leaders first. Champions in every team.

Every director, VP, and C-level trained until they can demo their own workflows. Then one champion embedded per function, with protected time. Workshops, dept-level hackathons, weekly drop-ins in every office. Training is central. The building lives in the teams that know which problems are worth solving.

Unit One champion per ~30 people. 4 hrs/week protected. Direct line to us. Outlasts the engagement.
04§ Workstream
Principle · Lightweight review, not committee theater

Fast review. Real guardrails.

A review path for anything touching production or clients. Days, not weeks. Templates for the common cases. Real guardrails without ticket queues, steering committees, or compliance bottlenecks. Monday idea, Friday ship. We build the process and train the reviewers.

Bar Idea Monday · Shipped Friday. More than five business days and it's a queue, not a review.
05§ Workstream
Principle · Make safe building easy

Remove the friction. Share the wins.

Pre-approved tools covering 90% of what people want to do. Data access policies in plain language. A shared library of prompts, templates, and finished builds anyone can fork. When someone builds something good, the whole company sees it within a week.

Library Prompts, templates, finished builds. The infrastructure that makes the safe path the easy path.
§ 02 · The 12-Month Arc
Starts with a number. Ends with a number.
Five phases · Three re-measurements · One scoreboard
01
Weeks 1–2
Baseline
  • Seat usage audit
  • 1,000-person survey
  • Leadership interviews
  • Baseline report
02
Weeks 3–6
Leaders First
  • Exec hands-on
  • CEO in wins channel
  • Champions named
  • All-hands commit
03
Day 90
First Lift
  • Champions embedded
  • Review live
  • Tools pre-approved
  • 90-day re-measure
04
Months 4–6
Scale
  • Dept hackathons
  • Office drop-ins
  • Gallery live
  • 6-month re-measure
05
Month 12
Prove
  • Advanced training
  • Library compounds
  • Final re-measure
  • Litmus test passed
§ 03 · The Sweet Spot
Fifteen people. One room. The nervous skeptic next to the engineer already shipping internal tools.
The Söderberg format · Live prompting · Live building · People learn fastest when they watch someone at their level do something they didn't think they could do.
§ 04 · Deliverables

What lands on your desk.

Six concrete artifacts. Every one traceable to a baseline number and a re-measurement.

A fluency baseline.

Where every team stands on day one. Usage data, survey results, interviews — a one-page read for leadership. The number we measure against.

An executive playbook.

Hands-on training for every director, VP, and C-level until they can demo their own workflows. The playbook they carry into every team meeting.

A champion network.

One embedded champion per function. Protected time. Monthly syncs. A group that outlasts the engagement and keeps compounding.

A review process.

A path from idea to shipped build in days. Templates for the common cases. Guardrails that protect without blocking. Reviewers trained.

A shared library.

Pre-approved tools, prompts and templates, a gallery of finished builds anyone can fork.

Three re-measurements.

At 90 days, 6 months, and 12 months. Same cohort, same survey, same audit. The proof the number moved.

§ 05 · Fit

The right fit looks like this.

A good fit.
Say Yes If This Is You

A company of 500 to 2,500 people. Licenses purchased, adoption stalled. A CEO who can credibly post first in a wins channel. Leadership willing to name AI as part of everyone's job and mean it.

Willingness to measure honestly and let the results be what they are.

Probably not.
Say No If This Is You

Looking for a framework doc to hand to a working group. Want a 12-week "AI transformation" that finishes before Q4. Can't get the CEO in the room for four hours. Prefer governance theater to measured lift.

We'd rather tell you now than six months in.

§ 01 · The Paradox

The data doesn't match the hype.

Ninety percent of firms report no productivity impact from AI. Not because the technology isn't ready. Because they bolted it onto broken workflows and skipped the organizational homework. That's a people and operations problem. Which is what we fix.

Firms · No Measurable Impact
~90%
NBER study, 6,000 executives, four markets, three years.
Corporate AI Spend · 2024
$250B+
Global enterprise spend on AI licenses, tooling, infrastructure.
Avg Exec AI Use · Per Week
1.5 hrs
Of those who use AI. Three-quarters do. A quarter don't open it.
Executives Not Using AI
25%
Zero usage. License sitting on the account, untouched.
§ The Apollo Line

Apollo's Torsten Slok put it bluntly: "AI is everywhere except in the incoming macroeconomic data." That's the paradox in one sentence. Any CEO whose AI budget is tracking above last year's number should stop and look twice at the output.

The gap between narrative and number now has its own literature. The question isn't when the gains will land. It's who captures them.

Source NBER Working Paper · 2025.

Quote Torsten Slok, Chief Economist, Apollo Global Management.
§ Solow · 1987
"You can see the computer age everywhere except in the productivity statistics."
— Robert Solow · Nobel Laureate · 1987 · Forty years before anyone said "AI transformation"
§ 02 · The Pattern

Same paradox. Forty years later.

The 1970s and 1980s saw enormous IT investment with almost no productivity gain — until the mid-1990s. The dominant explanation: "lags due to learning and adjustment." IT benefits take two to five years. They only appear when firms reorganize around the technology.

§ What We Know

The firms that won the post-1995 boom were the ones that used the lag period to re-engineer how work got done. Not the ones with the most PCs. Not the ones with the biggest IT budgets. The ones that did the organizational homework.

The returns came from the complementary investment — process redesign, new job descriptions, retraining, different performance metrics, different decision rights. Walmart didn't win the 90s because they had better computers. They won because they rewired the retail operating model around what the computers made possible.

The same lesson applies cleanly to AI. The gains are coming. They'll accrue to the firms that use this lag period to reorganize, not the ones still trying to solve the problem with more licenses.

Window IT paradox ~1975–1995. Resolved mid-90s onward.

Thesis Organizational investment, not more hardware.

Read Brynjolfsson · Lags-due-to-learning.
§ 03 · The Bottleneck

What's actually breaking.

The research tells the macro story. The ground tells the operational one. Across hundreds of threads from people inside the rollouts, a cleaner picture emerges — companies are bolting AI onto broken workflows. AI amplifies what's already there. If what's there is broken, you get faster broken.

01§ Failure Mode
Decision-making, not execution

The real bottleneck is upstream.

Organizations spend months speeding up the doing when the bottleneck was always the deciding. Scoping, prioritization, review, sign-off. AI accelerates the execution layer and leaves the decision layer more exposed. The queue doesn't shorten. It moves upstream.

Signal Top-voted manager comment across internal threads. Consistent with our client reads.
02§ Failure Mode
Junior pipeline collapse

Seniors get 4x leverage. Juniors lose the learning path.

AI automates the tasks juniors used to learn from: research, first drafts, synthesis. The senior with judgment becomes dramatically more productive. The junior who would become that senior in five years loses the apprenticeship. A three-to-five-year knowledge-work time-bomb. IBM's CHRO has flagged it publicly. Few firms have a plan.

Risk 3–5 years to senior-talent shortage.

Flagged by IBM CHRO (Fortune).
03§ Failure Mode
Work slop

Faster output. Heavier review.

AI-assisted output is voluminous and uneven. More drafts, more decks, more emails — all generated in the time it used to take to produce one. The reviewer carries more cognitive load, not less. The "productivity gain" quietly transfers to the producer and burdens the person signing off.

Pattern Review burden migrates upward. Senior time gets eaten by low-quality drafts.
04§ Failure Mode
Time saved → slack, not value

Employees save time. The business doesn't see it.

A Stanford study tracked what workers did with AI-saved time. Honest answer: TV and friends. Rationally so: why would anyone hand efficiency gains back for free? Without an explicit capture mechanism, the gains evaporate into slack. The P&L stays flat.

Source Stanford · AI time reallocation.

Takeaway Build capture mechanisms or capture nothing.
05§ Failure Mode
AI brain fry

Past three tools, productivity goes backwards.

BCG found that once workers exceed three AI tools, productivity drops from context-switching. The instinct — buy more tools, pilot more vendors, layer more copilots — is self-defeating. Pick two or three per role. Go deep.

Study BCG · AI tool sprawl & productivity.

Cap Three per role. Case required for the fourth.
§ The Clean Sentence
Winners won't be the companies that deploy the most AI. They'll be the ones that redesign workflows around it.
The organizational homework. The thing nobody wants to do. The thing that always ends up being the work.
§ 04 · The Playbook

Five moves that actually stick.

The five moves we push on every engagement. Not glamorous. Not framework-shaped. The organizational homework the research says wins.

01§ Move
Diagnose before prescribing

Find the real bottleneck first.

For most organizations, the blocker isn't speed. It's scoping, prioritization, review, or billing. We build a bottleneck diagnostic into every engagement so we don't apply AI to the wrong stage. A faster execution layer on top of a broken decision layer is worse than what you had before.

Deliverable Two-week bottleneck audit. Top five constraints, ranked, with AI mapped where it fits.
02§ Move
Cap tool sprawl at three per role

Two or three tools. Go deep.

The BCG brain-fry finding is one of the most practical insights in the literature. Resist the instinct to push ten tools at a team. Pick two or three high-leverage ones. Go deep on workflow integration. The shelf with fifteen tools is a shelf with zero adoption.

Rule Three per role, max. A fourth requires a written case, reviewed.
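As an illustration of how the cap could be operationalized against a seat-usage audit (hypothetical data shape and names; not part of the engagement tooling), a sketch that flags anyone holding more than three tools:

```python
# Hypothetical sketch: flag tool sprawl from a seat-usage audit.
# Assumes audit rows of (person, tool); the cap is three tools per person.

from collections import defaultdict

CAP = 3

def over_cap(audit: list[tuple[str, str]], cap: int = CAP) -> dict[str, int]:
    """Return {person: distinct_tool_count} for everyone above the cap."""
    tools_by_person: dict[str, set[str]] = defaultdict(set)
    for person, tool in audit:
        tools_by_person[person].add(tool)  # sets dedupe repeat usage rows
    return {p: len(t) for p, t in tools_by_person.items() if len(t) > cap}

audit = [
    ("anna", "copilot"), ("anna", "chatgpt"),
    ("bjorn", "copilot"), ("bjorn", "chatgpt"),
    ("bjorn", "claude"), ("bjorn", "notion-ai"),
]

print(over_cap(audit))  # bjorn holds four distinct tools → {'bjorn': 4}
```

Anyone the check flags is the person who owes the written case for the fourth tool.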
03§ Move
Process first. Tool second.

Pair every rollout with a redesigned workflow.

Central lesson from the Solow resolution: IT delivered returns only when paired with complementary organizational investment. Translated to AI — no training session ends with "here's the tool." It ends with "here's the redesigned workflow, here's what stops being done, here's how the handoff changes."

Format Workshop ends with a new process diagram and what gets removed. The tool slots into that process.
04§ Move
Capture the reclaimed time

When AI frees 20% of capacity, where does it go?

Without an explicit capture mechanism, saved time evaporates. Rationally. The question: when AI frees up a fifth of someone's week, where does it go? Higher-value work? New revenue? A shared pool? Pick one and commit.

Options Higher-value reallocation · shared efficiency pool · reduced headcount plans · customer-facing hours.
05§ Move
Protect the junior pipeline

Build the apprenticeship back, deliberately.

AI automates the tasks juniors learned from. Unless you rebuild the development path on purpose, you hollow out the future senior bench. Structured problem-solving reps without AI. Required rationale-writing. Supervised AI-output review. A leadership-pipeline risk, not nostalgia.

Track Dedicated "junior development under AI" stream. Explicit reps. Explicit coaching. Explicit metrics.
§ The House Position
AI is not a tech problem.
It's a people and operations problem.
That's what we do.
— Vekta · April 2026
§ 05 · The Practice

The homework. While everyone else buys licenses.

The Solow paradox took twenty years to resolve. The firms that won in the 90s used the lag to rewire themselves. The rest were busy buying hardware. That's the pitch.

§ Why Vekta

Everything above is operational work dressed up as a tech rollout. Decision rights. Process redesign. Pipeline development. Compensation structures that capture gains. Review rhythms. Tool discipline. These are the things we've been doing for twenty years, before anyone called it AI strategy.

Our AI Fluency engagement is the operator version of the playbook. A baseline. A leader cohort. Champions in every team. A review process that ships builds in days. Three re-measurements over twelve months. Not a training course — an organizational rewire, with AI as the reason we're doing it now.

If you're a CEO watching the $250B bet not show up on your own P&L yet, we help you be one of the firms where it eventually does.

Read the full AI Fluency engagement →

The Tie-In Flows into our engagement — the 1,000-person, 12-month rollout that operationalizes every one of the five moves.

Read Next AI Fluency Engagement · 7 min.
§ Contact

Ready to do the homework?

First conversation is free. Ninety minutes with your leadership team. We tell you honestly where your bottleneck is, which move matters most, and whether we're the right hands to help. No deck. No deliverable. No obligation.

Phone
+46 76 118 18 15
Location
Stockholm, Sweden