
AI Governance – When You’re Accountable for What You Can’t Explain

  • Writer: Chris Crowe
  • 7 days ago
  • 3 min read

I wanted to share a perspective that has been forming as we spend more time with AI across our work and with clients. This is not about adoption. Most organisations have already moved past that point. It is about how well we understand and govern what is now being embedded into operations.


Across the market, there is a consistent pattern. AI is being deployed broadly, but governance capability is not evolving at the same pace. That gap is beginning to surface as a real structural issue.


The core tension is relatively straightforward. To generate meaningful value from AI, organisations must give these systems a degree of autonomy. If they are constrained too tightly, they do not deliver much benefit. But as autonomy increases, the ability to fully observe and control how outcomes are produced begins to decline.


At the same time, accountability does not change. The organisation remains legally, operationally, and reputationally responsible for what those systems do. This creates a widening gap between accountability and control.


This dynamic is now being more widely recognised externally. Geoffrey Hinton, one of the pioneers of modern AI, has been quite direct in describing why this moment is different from previous technology shifts. He points to the speed and scale at which these systems operate:


“When you and I transfer information, we're limited to the amount of information in a sentence… These things are transferring trillions of bits a second.”


The implication is that we are not simply introducing new tools. We are introducing systems that operate at a scale and speed that challenge traditional oversight models.


He also highlights a second important distinction. These systems do not degrade in the way human expertise does:


“When you die, all your knowledge dies with you. With these things, if you've stored the connection strengths, you can recreate that intelligence.”


From a business perspective, this creates strong incentives to deploy AI more broadly and with greater autonomy, which is where the tension emerges.


Control and visibility are diverging. Accountability remains.

Hinton also separates the risks into two categories: misuse by humans and risks associated with increasingly autonomous systems. The second category is less understood but more structurally important. As these systems become more integrated into workflows and begin influencing decisions, organisations may find themselves accountable for outcomes they cannot fully explain.


His analogy is a simple but effective one:


“If you want to know what life's like when you're not the apex intelligence, ask a chicken.”


While provocative, the point is practical. Decision-making environments are beginning to change in ways that are not always visible to leadership. This is something we are starting to see more directly.


Many organisations now have multiple AI tools operating across different functions. These tools interact with internal data, influence workflows, and in some cases begin to shape how decisions are made. As this becomes more embedded, it becomes increasingly difficult to trace how a particular outcome was produced.


Traditional governance approaches, including policies, approvals, and model validation, remain necessary. However, they were not designed for systems that operate with this level of autonomy and interaction.


There is also a human dimension that we should not underestimate.


People tend to trust AI outputs more than they should. Over time, that can change how decisions are made, how teams challenge assumptions, and how expertise is applied within the organisation. The more immediate issue is how judgment is exercised once these systems become part of the decision-making process. That is less about the technology itself and more about the operating model required to support it.


The question is not whether we adopt AI. That is already happening. The question is whether our operating model evolves at the same pace, so that judgment and accountability remain aligned with the systems we deploy. Without that, we risk ending up accountable for outcomes we cannot clearly explain.


As we think about this more broadly, a few questions are worth considering:

  • Do we have a clear view of where AI is operating within the organisation today?

  • Can we explain how decisions are being influenced or made where AI is involved?

  • Are we comfortable with the level of autonomy currently in place?

  • Are we seeing changes in how teams rely on judgment versus system outputs?

  • If something were to go wrong, could we clearly explain what happened?


This is not a recommendation to slow down adoption. The productivity benefits are real, and the direction of travel is clear. It does suggest, however, that governance and operating model design need to evolve alongside adoption.


I will continue to bring forward perspectives on this as we refine our thinking and see how this develops across clients and the broader market.



For more practical insights on organisational transformation, follow CMBYND on LinkedIn and subscribe to our newsletter CMBYND Thinking.
