
AI Reveals Your Org Chart

AI adoption · enterprise · org design · change management

When AI deployments stall, the most common diagnosis is technical friction. Wrong model, wrong integration, wrong vendor. Fix the stack and adoption follows.

That diagnosis is almost always wrong.

The blockers I see most often are not technical. They are organizational. And what makes them hard is not that they are hidden, it is that they look like technical problems from a distance.


The org chart you have vs. the org chart you drew

Every organization has two org charts.

The first is the one on the intranet: boxes, reporting lines, a dotted line here and there. The second is the one that runs the actual work: who you call when you need a fast answer, who the decision really waits for, where approval authority actually lives versus where it's supposed to live.

These two charts diverge in every firm. In most, they diverge significantly.

AI adoption follows the second chart. Not the first.

When you deploy an AI tool that cuts across team boundaries, you are not bumping into an integration problem. You are bumping into the informal power structures those boundaries protect. The compliance team that keeps reopening requirements is not being obstructive. They are protecting something the org chart doesn't show. The line manager who won't adopt the new assistant isn't resistant to technology. She is protecting headcount that justifies her role.


What the stalls tell you

There is a pattern to where AI pilots stall, and it is worth reading closely.

  • If it stalls at legal or compliance, the real issue is usually ambiguity about who owns the acceptable-use decision. No one wants to be the person who approved the policy that caused the incident. The technical solution exists. The political one doesn't.

  • If it stalls at the manager layer, the real issue is usually incentives. Managers measured on throughput per head do not want tools that make their team smaller. They want tools that make their team look better. Same AI, completely different ask.

  • If it stalls after a successful pilot, the real issue is almost always that the pilot worked in a context that doesn't exist at scale. The power user who ran the pilot had context, access, and credibility that didn't transfer to the rollout population.

  • If it stalls before it starts, someone in a room said "not yet." And "not yet" usually means a different conversation needs to happen first, about ownership, about credit, or about a risk that no one has been willing to name out loud.

None of these are technology problems.


AI is a stress test, not a solution

The uncomfortable implication is that an AI deployment doesn't just reveal your technology maturity. It reveals your organizational maturity.

Where work is well-defined, authority is clear, and incentives are aligned, AI tends to land cleanly. The blockers are real but solvable.

Where work is ambiguous, authority is contested, and incentives pull in different directions, AI makes that worse before it makes anything better. The tool adds surface area to existing conflicts. Every workflow gap, every undocumented exception, every informal workaround becomes a visible problem that someone now has to own.

This is why AI readiness assessments that only check for cloud infrastructure and data quality miss most of what matters. The harder audit is organizational: who owns this decision, who has blocking power, and where incentives break down.

The firms that deploy AI most effectively are not necessarily the most technically sophisticated. They are the most organizationally honest.


What to do with this

You cannot fix your org chart by deploying AI into it. But you can use an AI deployment as a structured way to see your org chart clearly.

Every place a rollout hits unexpected friction is a place worth investigating: not as a technical problem to unblock, but as an organizational signal to read. What is being protected here? Whose authority or credit is at stake? What conversation has not happened yet?

The question is not why the AI isn't working. The question is what the AI is showing you about how the organization actually works.

Sometimes the answer is a genuine governance gap that predates the AI project by a decade. Sometimes the answer is that one stakeholder hasn't been properly brought in. Both are solvable. But you cannot solve either by adjusting the prompt.