A Bridge to AI

RACI is Broken: Why AI Demands a Rethink of Roles, Responsibility, and Change

Dee McCrorey
Jul 23, 2025

The Deep Dive | Words: 1,912 | Reading time: ~8 minutes

In most organizations, change doesn’t fail because of the tech—it fails because no one’s sure who’s responsible, how decisions get made, or what to do when the system spits out something weird—or worse, wrong.

Enter RACI: a staple of team alignment and accountability for decades. But AI is testing its limits. When tools behave like collaborators—but lack accountability—RACI begins to fray. If we’re going to lead AI, not be led by it, we need frameworks that account for both human and machine contributions. And we need change management strategies that are just as adaptive as the tech itself.

🧩 Quick Primer: What Is RACI?

  • R = Responsible (does the work)

  • A = Accountable (owns the outcome)

  • C = Consulted (advises)

  • I = Informed (kept in the loop)
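
To make that concrete, here’s a minimal sketch of a RACI matrix as data, written in Python purely for illustration. The task and the names in it are hypothetical, not drawn from any real assignment.

```python
# A RACI matrix is just a mapping from tasks to role assignments.
# The task and names below are hypothetical, purely for illustration.
raci = {
    "Draft launch announcement": {
        "R": ["Jordan (content lead)"],  # Responsible: does the work
        "A": "Priya (product owner)",    # Accountable: owns the outcome (exactly one)
        "C": ["Legal", "Brand"],         # Consulted: advises before the decision
        "I": ["Sales", "Support"],       # Informed: kept in the loop
    },
}

for task, roles in raci.items():
    print(f"{task}: accountable -> {roles['A']}")
```

Note the asymmetry the primer implies: R, C, and I can each hold several names, but A is a single owner. That single-owner slot is exactly where AI puts pressure on the model.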


❌ Why RACI Is Breaking Under Pressure

RACI was built for predictable projects, stable org charts, and clearly defined roles. But AI breaks those assumptions.

  • Problem: RACI is static—and AI is not.
    Traditional frameworks assign responsibility like boxes on an org chart. But AI doesn’t have a title. It doesn’t attend standups. Yet it shapes decisions, nudges outcomes, and introduces new kinds of risk.

  • Tension: AI introduces ambiguity, black-box risk, and ethics gaps.
    When AI contributes to an output, who’s accountable for the result? When models hallucinate or bias creeps in, is it the engineer? The product lead? The vendor? The gaps in RACI become more than just a workflow problem—they become ethical, operational, and legal risks.

  • Resolution: Adapt RACI—and re-center humans in the loop.
    AI doesn’t eliminate the need for clarity. It makes it more urgent. Organizations must evolve their role-mapping to account for tools that shape outcomes but lack agency. This doesn’t mean inventing a new acronym—it means embracing change leadership that sees RACI as a starting point, not a stopping point. (A sketch of what that might look like follows below.)

The future of accountability isn’t about replacing humans with AI.
It’s about making sure humans stay responsible—even when machines contribute.
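
What might adapting RACI, rather than replacing it, look like in practice? Here is one hedged sketch in Python, assuming nothing beyond the post’s own argument: AI tools may appear among the responsible contributors, but a guardrail insists the accountable party is always a named human. The class and field names are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: AI tools can share the "Responsible" work,
# but "Accountable" must always be a named human.
@dataclass
class Assignment:
    task: str
    responsible: list[str]   # humans and/or AI tools doing the work
    accountable: str         # must be a named human, never a tool
    ai_contributors: list[str] = field(default_factory=list)

    def __post_init__(self):
        # Guardrail reflecting the post's thesis: a machine can
        # contribute, but it cannot own the outcome.
        if self.accountable in self.ai_contributors:
            raise ValueError(
                f"{self.accountable!r} is listed as an AI tool; "
                "accountability must rest with a named human."
            )

# Usage: the model shares the work; a human owns the result.
review = Assignment(
    task="Summarize customer feedback",
    responsible=["LLM summarizer", "Sam (analyst)"],
    accountable="Sam (analyst)",
    ai_contributors=["LLM summarizer"],
)
```

The design choice worth noticing: the rule lives in the structure itself. An assignment that hands accountability to a tool simply cannot be created.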


Why Traditional RACI Falls Short in AI Projects

RACI works well when roles are fixed and workflows are linear. But AI introduces fuzziness into both. The lines between responsible, accountable, consulted, and informed don’t just blur—they bend.

  • Responsibility gets blurry.
    When AI generates content, insights, or decisions, who “owns” the output? A prompt isn’t authorship. A model isn’t an employee. Yet what it produces impacts people, products, and policy.

  • Accountability becomes diffuse.
    Tools don’t carry job titles. You can’t assign blame—or praise—to a language model. And when multiple teams use the same AI system, accountability often gets diluted across departments.

  • Consulted often becomes performative.
    Legal, DEI, and ethics teams are looped in after decisions are made—when the launch is near and the real influence window has already closed. It’s consultation in name only.

  • Informed gets truncated.
    Transparency tends to stop at the top. Stakeholders “in the know” are often executives, while the employees most impacted by AI changes are left guessing.

AI doesn’t just stretch the RACI model—it exposes its fault lines. We’re not just dealing with a new toolset. We’re navigating a new terrain of decision-making, risk, and shared responsibility. And legacy frameworks weren’t built for that.

RACI can still serve as a map. But without adaptation, it misleads more than it guides.
