Predictive Coding: How We Experience the World

Drawing on the work of Karl Friston, Andy Clark, and Anil Seth, this framework reframes perception as a form of “controlled hallucination,” with profound implications for how we understand cognition, emotion, interoception, and the experience of being a conscious self.

For leaders and organisations, predictive processing offers a rigorous account of why mental models are so resistant to change, why diverse teams genuinely see more, and why organisational change so reliably triggers threat responses.

Understanding this framework does not merely explain behaviour; it suggests concrete, evidence-informed strategies for working with the predictive brain rather than against it.

We find the work on this topic fascinating and helpful, if a little challenging and abstract. We hope you do too.

“You are not seeing the world. You are seeing your brain’s best guess about it, and most of the time you cannot tell the difference.”

That sentence should unsettle you slightly, and if it does, good. It calls into question some of our longest-held assumptions about how we make sense of the world and how we experience it. What it’s really pointing at is the predictive processing framework, sometimes called predictive coding, one of the most consequential ideas in contemporary cognitive science. Its implications reach far beyond the laboratory.

That quote asks us to reconsider something we take for granted every waking moment: that our perceptual experience of the world is a faithful record of what is actually out there. Spoiler – it is not.

What you (and I and everyone else) experience is a construction, a sophisticated, multi-layered, continuously updated prediction generated by your brain on the basis of prior experience, contextual cues, and incoming sensory data. And the sensory data, it turns out, plays a rather more junior role in this process than most of us have been led to believe.

Let me be more explicit.

The sensory information we receive as humans plays a smaller role in how we experience the world than the internally generated predictions we make about how we will experience the world.

It sounds kooky, right? But this is not a fringe idea.

Predictive processing has become, over the past two decades, one of the leading unifying frameworks in neuroscience and philosophy of mind. Its proponents include Karl Friston, whose free energy principle provides the mathematical foundations; Andy Clark, whose philosophical and cognitive science work has done more than perhaps anyone else’s to make the framework accessible and expansive; and Anil Seth, whose research on consciousness, interoception, and the “beast machine” has brought predictive processing into the public conversation with unusual clarity and style. Together, their work paints a picture of the mind that is at once deeply counter-intuitive and, once grasped, remarkably illuminating. It’s worth reading their books, though some get quite mathematical and difficult.

If the brain is fundamentally a prediction machine, then everything we know about perception, decision-making, bias, emotional regulation, and the experience of change needs to be reconsidered through that lens.

For those of us who work with people in organisations, the implications of these frameworks are substantial. We actually think these are the most influential and significant theories we’ve come across, but we can’t find a way to action them yet in any meaningful way in workplace interventions. If you or anyone is working in this area, please do get in touch as we’d love to learn more!

The Predictive Processing Framework: A Brain That Guesses First

The classical view of perception, still implicit in much popular understanding, runs something like this: sensory information arrives at the brain’s doorstep (light hits the retina, sound waves vibrate the eardrum, molecules land on the tongue), and the brain processes that information in a bottom-up fashion, building up a picture of reality from raw data. First come the pixels, then the edges, then the objects, then the meaning. Perception, in this account, is essentially a sophisticated form of data processing.

Predictive processing inverts this picture almost entirely. Rather than waiting passively for sensory input and then working out what it means, the brain is constantly, proactively generating predictions about what sensory signals it expects to receive. These predictions flow downward through the cortical hierarchy, from higher-level areas that deal in abstractions and models to lower-level areas that process raw sensory data. What flows upward, from the senses toward higher processing areas, is not the sensory signal itself but the prediction error: the difference between what the brain predicted and what actually arrived.

This is a profound shift.

In the predictive processing framework, the brain’s primary job is not to process incoming information. Its primary job is to minimise surprise, to reduce the gap between its predictions and reality. Perception, then, is what happens when prediction meets the world and the two are reconciled. As Andy Clark puts it, we are not so much perceiving reality as “surfing uncertainty,” riding the waves of prediction error and constantly adjusting our internal models to stay upright.

Karl Friston and the Free Energy Principle

The mathematical backbone of predictive processing comes from Karl Friston, a neuroscientist at University College London whose free energy principle has been described, with varying degrees of enthusiasm and exasperation, as a unified theory of the brain. Friston’s central claim is that all adaptive systems, biological brains included, can be understood as minimising a quantity called “variational free energy,” which, for our purposes, can be thought of as a formal measure of surprise or prediction error.

The free energy principle states that organisms maintain their existence by keeping themselves in expected states, by ensuring that their sensory experiences remain within the bounds of what their internal models predict. When prediction error (free energy) is high, the organism is in an unexpected state, and this is, from a biological perspective, potentially dangerous. A fish out of water is experiencing a great deal of prediction error. So is a human being who has just been told their department is being restructured (though admittedly the consequences are less immediately lethal).

There are, broadly, two ways to minimise prediction error.

  1. The first is to update your internal model to better match the incoming sensory data. This is perceptual inference: changing your beliefs about the world to accommodate new evidence.
  2. The second is to act on the world to make it conform to your predictions. This is active inference: changing the world to match your expectations. You expected the room to be warm; it is cold; so you turn on the heating. (There are also more profound versions of this: because your model predicts that in situations like this you will touch the wall, you find yourself reaching out to touch the wall.)

Both strategies serve the same fundamental goal of prediction error minimisation, and both are happening constantly, automatically, and largely outside conscious awareness.
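The two routes can be caricatured in a few lines of code. This is a toy sketch, not anything from Friston’s formalism: the temperatures, the 0.5 update rates, and the function names are all illustrative assumptions of our own.

```python
# Toy sketch of the two routes to minimising prediction error.
# Numbers, update rates, and names are illustrative, not from the literature.

def perceptual_inference(belief, observation, rate=0.5):
    """Route 1: update the internal model toward the evidence."""
    return belief + rate * (observation - belief)

def active_inference(world, belief, rate=0.5):
    """Route 2: act on the world to pull it toward the prediction."""
    return world + rate * (belief - world)

belief, world = 21.0, 15.0  # expected vs actual room temperature (deg C)

print(perceptual_inference(belief, world))  # 18.0 -- the model moves toward the world
print(active_inference(world, belief))      # 18.0 -- the world moves toward the model
```

Either way the gap (the prediction error) shrinks; the only difference is which side of it gives way.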

Friston’s framework is mathematically dense and has attracted both fervent admiration and pointed scepticism (some critics argue it is so general as to be unfalsifiable, which is a serious charge in science). But even those who find the full formalism unwieldy tend to accept that the core insight, that the brain is fundamentally in the business of generating and testing predictions, is both well-supported by neuroscientific evidence and deeply productive as a way of understanding cognition.

Andy Clark: Surfing Uncertainty

If Friston provides the mathematics, Andy Clark provides the philosophy, the cognitive science, and, crucially, the prose. Clark, a philosopher at the University of Sussex (and previously Edinburgh), has been the most influential voice in articulating what predictive processing means for our understanding of mind, self, and world. His 2015 book Surfing Uncertainty remains the definitive philosophical treatment of the framework, and his 2023 The Experience Machine extends the argument to the nature of conscious experience itself.

Clark’s central contribution is to show how the predictive processing framework reframes virtually every aspect of cognition.

  • Perception is not bottom-up data processing but top-down prediction constrained by bottom-up error signals.
  • Attention is not a spotlight that illuminates parts of the world but a mechanism for adjusting the precision weighting of prediction errors, determining which errors the brain treats as important and which it ignores.
  • Action is not a response to perception but is itself a form of prediction, an embodied expectation about how the body will move and what sensory consequences that movement will produce. (Action is a prediction…)

The concept of precision weighting deserves particular attention, because it is doing enormous theoretical work. Not all prediction errors are created equal. Some are highly informative (a loud bang in a quiet room) and should update your model immediately. Others are noisy and unreliable (a vague shadow in your peripheral vision during a thunderstorm) and should probably be downweighted. The brain manages this by assigning precision estimates to its predictions and to the incoming sensory signals. High-precision prediction errors drive rapid model updating; low-precision ones are effectively ignored. This mechanism, Clark argues, is what we experience as attention: the selective amplification of certain prediction errors and the suppression of others.
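Precision weighting can be sketched as inverse-variance weighting of two Gaussian estimates, which is the standard statistical reading of the idea; the scenario values below are our own illustrative assumptions, not measurements.

```python
# Precision-weighted combination of a prediction and a sensory signal.
# Precision = inverse variance; the more precise source dominates the estimate.

def precision_weighted_update(prior_mean, prior_precision,
                              signal, signal_precision):
    total = prior_precision + signal_precision
    posterior = (prior_precision * prior_mean
                 + signal_precision * signal) / total
    return posterior

# A loud bang in a quiet room: high-precision error, large update.
print(precision_weighted_update(0.0, 1.0, 10.0, 9.0))   # 9.0

# A vague shadow in a storm: low-precision error, mostly ignored.
print(precision_weighted_update(0.0, 1.0, 10.0, 0.1))   # ~0.91
```

On Clark’s reading, attention just is the setting of those precision terms: turn a signal’s precision up and it drives perception; turn it down and it barely registers.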

This has immediate implications for understanding why two people can look at the same situation and see genuinely different things.

  • It is not merely that they interpret the data differently.
  • It is that their brains, on the basis of different prior experiences and different precision weightings, are literally constructing different perceptual experiences from the same raw input.
  • Their predictions differ, and because perception is dominated by prediction rather than sensation, their experiences differ.
  • This is not a metaphor. It is, according to the predictive processing account, how perception actually works.

Pretty bonkers really. At a personal level, I notice my experience of the world shifting with my biological state, in a way that suggests my precision weightings vary. For example, on days when I’m particularly tired, people (the general swathe of humanity) look different to me than they do when I’m well rested.

Anil Seth: Controlled Hallucination and the Beast Machine

Anil Seth, another neuroscientist at the University of Sussex (at least at the time of writing), has taken predictive processing in a direction that is at once more intimate and more unsettling. Where Clark focuses primarily on perception of the external world and the nature of cognitive processing, Seth turns inward, to the body, to emotion, and to consciousness itself.

Seth’s most famous formulation is that perception is a form of “controlled hallucination.” This is not a rhetorical flourish; it is a precise description of what the predictive processing framework implies. If what we perceive is primarily the brain’s top-down prediction, modulated but not determined by bottom-up sensory input, then our perceptual experience is, in a very real sense, a hallucination, one that happens to be constrained by reality. In ordinary, healthy perception, what we experience is the brain’s best guess, not a direct readout of the external world.

Again, what we’re saying here is that what we experience in the world is predominantly our prediction of what we will or should be experiencing in each moment of life. The actual sensory inputs we receive are less important than what we expect those sensory inputs to be telling us.

Seth extends this argument to interoception, the brain’s perception of the body’s internal states: heartbeat, breathing, gut sensations, temperature, hunger, pain. In his 2021 book Being You, and in his research with Friston on active interoceptive inference, Seth argues that the brain predicts the body’s internal signals in exactly the same way it predicts external sensory signals. Your experience of your own body, your felt sense of being alive, of having a body that is yours, is itself a predictive construction. The brain does not passively monitor the body’s signals; it actively predicts them, and it is this ongoing act of prediction that gives rise to what Seth calls the experience of “being a beast machine,” a self-sustaining, self-predicting biological organism. This is wild. It’s totally out there, but it feels reasonable at the same time, and links to the power of placebo effects, and even nocebo effects.

The implications for understanding emotion are particularly striking. On this account, emotions are not hard-wired responses triggered by specific stimuli (the classical view that there are basic emotions like fear, anger, and joy, each with a dedicated neural circuit). Instead, emotions are the brain’s predictive interpretations of interoceptive signals in context. A racing heart might be predicted as excitement in one context and anxiety in another. The physiological signal is the same; the brain’s predictive model, shaped by prior experience, expectation, and situational context, determines what you actually feel. This converges powerfully with Lisa Feldman Barrett’s theory of constructed emotion, which makes a similar argument from a different empirical starting point, and the combined weight of these two frameworks represents a serious challenge to folk psychological assumptions about the nature of emotional experience.

Seth takes this further still.

If the self is a prediction, if our sense of being a unified, continuous, embodied agent is the brain’s best model of its own organism, then the self is not a fixed entity that perception and emotion happen to. The self is something that the brain continually predicts into existence.

This is the “beast machine” thesis: we are, at the deepest level, self-predicting organisms, and consciousness itself is what it feels like to be a system that models its own existence. This idea that consciousness is what it feels like to process information has been posited by others before, including the physicist Max Tegmark.

The specific chapter on the Beast Machine theory in Seth’s “Being You” is particularly worth checking out.

The Bayesian Brain: Priors, Likelihoods, and Posteriors

Predictive processing is often described as a “Bayesian” framework, because its logic mirrors Bayesian probability theory. In Bayesian inference, you start with a prior belief (your existing model of the world), you encounter new evidence (sensory data), and you update your belief to produce a posterior (a revised model that integrates prior and evidence, weighted by their relative reliability).

The brain, on this account, is doing something functionally equivalent. Your prior beliefs, shaped by a lifetime of experience, development, and (at a deeper level) evolutionary history, generate predictions about what you will experience. When those predictions meet actual sensory input, the brain performs something like Bayesian updating, revising the model in proportion to the strength and reliability of the prediction error. If you have very strong priors (deeply held beliefs or expectations) and the incoming evidence is weak or ambiguous, your priors will dominate and you will perceive what you expected to perceive. If the evidence is strong and unambiguous and your priors are weak, the evidence will dominate and your model will update.

This has profound implications for understanding how beliefs persist in the face of contradictory evidence, a phenomenon that anyone who has tried to change someone’s mind (or their own) will recognise. Strong priors are, computationally speaking, difficult to shift. They require overwhelming, high-precision prediction error to overturn. Weak, ambiguous, or easily reinterpreted evidence will simply be absorbed into the existing model, confirming rather than challenging the prior belief. This is not irrationality; it is Bayesian optimality. Given the brain’s task of making sense of a noisy, ambiguous world, giving weight to accumulated experience is, most of the time, the right strategy. It just happens to also be the mechanism that underlies confirmation bias, motivated reasoning, and a great deal of organisational dysfunction.
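The arithmetic of belief persistence is easy to see with Bayes’ rule for a single hypothesis. A minimal sketch, with illustrative probabilities of our own choosing:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after one piece of evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Ambiguous evidence that mildly favours "the hypothesis is false".
# Against a strong prior, the belief barely moves:
print(round(bayes_update(0.95, 0.4, 0.6), 3))  # 0.927

# The same evidence against a neutral prior shifts it much further:
print(round(bayes_update(0.50, 0.4, 0.6), 3))  # 0.4
```

Nothing here is irrational: the strong prior is doing exactly what it should given the weakness of the evidence. It would take many such observations, or one far less ambiguous one, to drag the 0.95 believer anywhere near neutrality.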

How Prior Beliefs Shape Perception (Not Just Cognition)

It is worth emphasising that predictive processing makes claims not merely about how we think but about how we perceive. This is a stronger and more unsettling claim than it might initially appear. Most of us are comfortable with the idea that our beliefs influence our judgements and interpretations, that cognitive bias is a thing. We are rather less comfortable with the idea that our beliefs influence what we literally see, hear, and feel, that perception itself is biased.

But the evidence is substantial. Research on perceptual priors has shown that expectations reliably alter what people report seeing, hearing, and feeling, not merely what they think about what they see, hear, and feel. The classic examples are well known: the same wine tastes better from an expensive bottle; the same word is heard differently depending on the visual context (the McGurk effect); the same ambiguous image is seen as a rabbit or a duck depending on whether you have been primed to expect one or the other. These are not failures of reasoning applied after perception. They are perception itself being shaped by prediction.

In the context of organisations, this matters enormously. A leader who expects a team member to underperform does not merely interpret their work more negatively. They may literally perceive it differently, noticing errors that a more neutrally primed observer would miss, failing to register strengths that are plainly visible to someone with different priors. This is not cynicism or bad faith (or it need not be). It is the predictive brain doing what it does: generating experience from the intersection of expectation and evidence, with expectation typically holding the stronger hand.

Relevance to the Workplace: Prediction Error, Mental Models, and Organisational Life

If the predictive processing framework is even approximately correct, then several features of organisational life that are usually treated as puzzles or annoyances become not merely explicable but predictable. Let us work through the most significant.

Mental models and leadership perception. Leaders, like everyone else, see the world through their priors. Their accumulated experience, their training, their successes and failures all contribute to a predictive model that shapes what they perceive, what they attend to, and what they regard as important. This is not optional; it is how perception works. The danger arises when leaders mistake their model-dependent perception for objective reality, when they confuse “what I see” with “what is.”

We’ve worked with senior leaders who were genuinely baffled that their team saw a situation completely differently from them, not because the team was wrong or difficult, but because the leader’s strong priors were generating a perceptual experience that simply did not match what others were experiencing from their different vantage points.

Confirmation bias as a prediction mechanism. Confirmation bias, the tendency to seek and interpret evidence in ways that confirm existing beliefs, is not a flaw in an otherwise rational system. It is a natural consequence of Bayesian inference with strong priors. The brain is not broken when it confirms what it already believes; it is doing exactly what a prediction machine should do when confronted with ambiguous evidence.

Recognising this does not excuse confirmation bias, but it does suggest that combating it requires more than exhortation to “be more open-minded.” It requires the deliberate introduction of high-precision, hard-to-ignore prediction errors into the system, which is to say, it requires structures and practices that force contact with disconfirming evidence in ways that cannot be easily dismissed.

Diverse teams see more, literally. If perception is constructed from the intersection of prior experience and incoming data, then a team composed of individuals with genuinely different priors (different life experiences, disciplinary backgrounds, cultural contexts, demographic identities) will, collectively, perceive a richer, more multifaceted version of reality than a homogeneous team.

This is not the familiar business case for diversity repackaged in neuroscience jargon. It is a direct implication of the predictive processing framework. Different priors generate different predictions, which generate different perceptual experiences, which means that genuinely diverse teams are, in a meaningful sense, seeing things that homogeneous teams cannot.

The qualification “genuinely diverse” matters here, because diversity that is present on paper but suppressed in practice, where minority viewpoints are systematically ignored or penalised, gains nothing. The priors are different, but the prediction errors they generate are being downweighted to zero.

Why change feels threatening. Organisational change, particularly the kind that disrupts established routines, relationships, and expectations, is, from the predictive brain’s perspective, a sustained source of prediction error. The world is no longer behaving as expected. Familiar patterns have been disrupted. The models that used to work are no longer generating accurate predictions.

This is, in the free energy framework, a state of increased surprise, and the brain’s response to sustained surprise is stress, anxiety, and a powerful motivation to restore predictability, either by updating the model (adapting to the change) or by acting on the world to make it predictable again (resisting the change).

This explains something that change management practitioners have observed for decades but often struggle to articulate: why even obviously beneficial changes can provoke intense resistance.

The content of the change, whether it is “good” or “bad,” is less important than the prediction error it generates. A change that is objectively positive but profoundly unpredictable will feel more threatening than a negative but predictable status quo.

The brain does not optimise for objective goodness; it optimises for predictability. Understanding this does not make change easy, but it does suggest that attending to predictability, giving people as much certainty, structure, and advance notice as possible, is not mere “change communication.” It is working with the fundamental architecture of the human mind.

Stress as chronic prediction error. If we take the free energy principle seriously, then chronic stress can be understood as a state of sustained, unresolved prediction error, a condition in which the world persistently fails to match the brain’s models and the brain cannot either update its models sufficiently or act on the world effectively enough to close the gap.

This maps neatly onto what we know about the psychology of stress: uncertainty, lack of control, unpredictability, and the feeling that one’s coping resources are inadequate are all, in predictive processing terms, conditions of persistent high free energy. Burnout, on this account, might be understood as the exhaustion that comes from a system that has been trying and failing to minimise prediction error for too long.

Practical Steps for Individuals: Working With Your Predictive Brain

Understanding that you are a prediction machine is, in itself, a useful insight. We certainly find it to be so.

But the knowledge becomes more genuinely powerful only when it informs practice. The following are concrete, evidence-informed strategies for individuals who want to work more effectively with their own predictive processing.

1. Develop a practice of noticing your predictions

Before entering a meeting, a conversation, or any situation that matters, pause and ask yourself: what am I expecting to happen? What do I expect this person to say? How do I expect this to feel? Write it down if you can.

The act of making your predictions explicit, of dragging them from the background of automatic processing into the foreground of conscious awareness, is the first step toward being able to interrogate them. You cannot examine what you cannot see, and most of our predictions operate entirely below the threshold of awareness.

In our experience, people who do this regularly report being genuinely surprised by how strong and specific their predictions are, and by how often those predictions shape their experience in ways they had never noticed.

It’s also possible to do this as part of a reflective practice after the event, but it’s harder to do this honestly and accurately.

2. Practise deliberate prediction error

Actively seek out experiences that violate your expectations. Read authors you disagree with, not to debunk them but to genuinely engage with a different set of priors. Have conversations with people whose life experience is fundamentally different from your own. Travel to places where your cultural predictions do not apply. Take a different route to work.

The point is not to destabilise yourself but to keep your predictive models flexible and responsive, to prevent the kind of rigidity that comes from never encountering meaningful prediction error. Think of it as cognitive cross-training: deliberately exposing your predictive system to novel inputs so that it remains adaptable rather than brittle.

This is hard and draining to do in practice, but helpful and rewarding too.

3. Pay attention to your body’s predictions

Interoceptive awareness, the ability to notice and accurately interpret your body’s internal signals, is, on Seth’s account, itself a predictive skill. When you notice your chest tightening before a difficult conversation, that tightness is not just a physiological response; it is your brain’s prediction about the state of your body, informed by your model of what is about to happen.

Practising interoceptive awareness through mindfulness, body scanning, or simply pausing to check in with your physical state can improve the accuracy of these predictions and, crucially, give you a moment of space between the prediction and your response to it.

Techniques as simple as a daily two-minute body scan, paying slow, deliberate attention to sensations from your feet upward, can over time sharpen interoceptive precision and improve emotional regulation.

4. Reframe anxiety as prediction error, not as evidence of threat

When you feel anxious in a situation that is objectively not dangerous (a presentation, a performance review, a social gathering), your brain is generating high prediction error. It is predicting threat where there is none, or rather, it is weighting its threat priors more heavily than the incoming evidence warrants.

Recognising anxiety as prediction error, rather than as reliable evidence that something bad is about to happen, can create useful psychological distance. It does not make the anxiety disappear (the prediction is still running), but it changes your relationship to it. You move from “I am anxious because this is dangerous” to “my brain is generating a threat prediction that may not be accurate.” That shift, small as it sounds, can be surprisingly liberating.

5. Cultivate “precision flexibility”

One of the most practically useful concepts in predictive processing is precision weighting, the brain’s ability to amplify certain signals and dampen others. Rigid precision weighting, always treating the same kinds of signals as maximally important, leads to inflexible perception and behaviour. Flexible precision weighting allows you to adjust what you attend to based on context.

You can develop this flexibility through practices that train attentional flexibility: meditation that alternates between focused attention (high precision on a single object) and open monitoring (low precision, wide awareness); deliberately shifting your attention during meetings from the content of what is being said to the emotional tone, then to the body language, then back again; or practising what some contemplative traditions call “soft gaze,” allowing your attention to rest without fixating.

6. Keep a “prediction journal”

At the end of each working week, spend ten minutes writing down three predictions you made that turned out to be wrong. What did you expect? What actually happened? Where did the gap come from? This simple practice builds what metacognitive researchers call “calibration,” the ability to accurately assess the reliability of your own predictions.

Over time, you may notice patterns: perhaps your predictions about certain types of people are consistently inaccurate, or your expectations about how long tasks will take are reliably optimistic (or reliably wrong if you’re like me). These patterns are your priors made visible, and once visible, they become workable.

7. Use “prediction swaps” in difficult relationships

When you find yourself in a recurring conflict or misunderstanding with someone, try explicitly generating their prediction about the situation rather than your own. What are they expecting? What priors are they bringing? What prediction errors might they be experiencing?

This is not the same as empathy in the warm, emotional sense (though it may produce empathy as a byproduct). It is a cognitive exercise in modelling someone else’s predictive system, and it can reveal that what looks like stubbornness, hostility, or irrationality from the outside is, from the inside, a perfectly reasonable response to a different set of predictions.

Practical Steps for Leaders: Building Prediction-Aware Organisations

If individuals can learn to work with their predictive brains, leaders can learn to create environments that support adaptive prediction rather than entrenching rigid ones. The following are practical strategies grounded in the predictive processing framework.

1. Provide prediction scaffolding during change

Since change generates prediction error, and excessive prediction error generates stress and resistance, one of the most useful things a leader can do during any change process is to make the future as predictable as possible. This does not mean eliminating uncertainty (which is usually neither possible nor desirable). It means being explicit about what is known, what is not yet known, and when more information will be available. Provide timelines, even provisional ones. Identify the things that will not change. Create routines and rituals that persist through the transition. Every piece of predictability you provide reduces the prediction error load on your people’s brains, freeing up cognitive and emotional resources for the adaptive work of model-updating that the change actually requires.

2. Design for “safe prediction errors”

If learning requires prediction error (and it does; you cannot update a model that is never surprised), then leaders need to create environments where prediction error is experienced as informative rather than threatening. This is, in essence, psychological safety described in predictive processing terms.

  • In a psychologically safe environment, prediction errors (mistakes, unexpected outcomes, disconfirming feedback) are treated as valuable data.
  • In a psychologically unsafe environment, prediction errors are treated as evidence of failure, incompetence, or threat.

The brain’s response will be accordingly different: in the first case, model-updating and learning; in the second, defensive model-preservation and rigidity. Leaders who want adaptive, learning organisations need to create conditions where being wrong is safe, because being wrong is the only way the predictive brain learns.

3. Audit your own priors regularly

Leaders’ mental models have disproportionate influence on organisational reality because leaders’ perceptions shape decisions, and decisions shape the environment that everyone else inhabits. This makes it essential, not optional, for leaders to regularly examine their own priors:

  • What assumptions am I making about this team, this market, this strategy?
  • What evidence would change my mind?
  • When was the last time I encountered evidence that genuinely surprised me, and what did I do with it?

Some leaders run an occasional “assumption audit”, systematically listing their key assumptions about the business and then actively seeking evidence that might disconfirm them. It is not a comfortable exercise, but it is a remarkably effective antidote to the natural tendency of strong priors to ossify into unexamined certainty.

4. Leverage cognitive diversity as a prediction error generator

The predictive processing framework provides a rigorous rationale for cognitive diversity that goes beyond the usual business case. A team with diverse priors will generate more and different prediction errors when encountering the same situation, which means the team as a whole has access to a richer, more nuanced model of reality.

However, this only works if those diverse prediction errors are actually heard and integrated. Leaders need to create structures, not just cultural aspirations, that ensure minority viewpoints are surfaced and taken seriously. This means things like structured dissent processes, rotating the role of “designated challenger” in meetings, and actively soliciting the perspective of the quietest person in the room rather than defaulting to the loudest (which is worth remembering next time you notice who dominates your team meetings and who stays silent).

5. Communicate in ways that work with predictive processing, not against it

When communicating complex or unwelcome information, consider what your audience’s brains are predicting. If your message radically departs from what people expect, the prediction error will be high and the defensive response will be correspondingly strong.

This does not mean you should avoid delivering difficult messages. It means you should prepare the ground: signal what is coming, provide context that allows people to begin updating their models before the main message arrives, and follow up with concrete information that helps the brain build a new, stable model. The worst thing you can do, from a predictive processing perspective, is to drop a bombshell and then go silent, leaving people’s brains to generate predictions from anxiety rather than information.

6. Create “model-updating” rituals

Build regular practices into your team’s routines that explicitly support the updating of mental models. After-action reviews, pre-mortems, structured reflection sessions, and “what surprised us this week” discussions all serve this function.

The key is to make these events genuinely reflective rather than performative. A pro-forma after-action review where everyone agrees things went well teaches the predictive brain nothing. A structured review where the team is asked “what did we predict would happen, what actually happened, and what does the gap tell us?” actively engages the prediction error minimisation system in the service of collective learning.

7. Attend to interoceptive signals in your organisation

Seth’s work on interoception suggests that the body’s internal signals are a significant, often overlooked, source of prediction. In organisational terms, this means that the “gut feelings” people report about a project, a hire, or a decision are not irrational noise to be dismissed. They are the output of predictive models that integrate information too complex or too subtle for conscious, verbal analysis.

This does not mean gut feelings are always right (they reflect priors, which may be outdated or biased). But it does mean they are information, and a leader who systematically ignores the interoceptive signals of their team is throwing away data. Create space for people to articulate these felt senses. Ask “what does your gut tell you about this?” alongside “what does the data tell you?” and treat both answers as worthy of examination.

8. Reduce unnecessary prediction error in the everyday environment

Not all prediction error is productive. Much of the stress and friction in organisations comes from unnecessary unpredictability: unclear expectations, inconsistent standards, shifting priorities, surprise decisions, and the general sense that the rules of the game keep changing without warning.

Leaders can reduce this background prediction error simply by being more consistent, more transparent, and more explicit about expectations. This is not about being rigid or controlling. It is about providing a stable predictive scaffold within which people can direct their cognitive resources toward the prediction errors that actually matter: the creative, challenging, generative kind, rather than wasting them on wondering whether this week's priorities are the same as last week's.

9. Use “belief updating” conversations in coaching and development

When working one-to-one with team members, particularly around performance or development, frame conversations explicitly in terms of beliefs and evidence. Rather than telling someone they are wrong about something, ask what evidence supports their current belief, what evidence might challenge it, and what would need to be true for them to update their model.

This approach respects the architecture of the predictive brain, which updates beliefs through evidence rather than instruction.
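To make the "evidence rather than instruction" point concrete, here is a minimal sketch of Bayes' rule, the textbook model of belief updating. The scenario and all the numbers are hypothetical, invented purely for illustration; they are not drawn from the sources cited in this post.

```python
# Illustrative only: a belief held with some probability, updated by
# evidence via Bayes' rule rather than by instruction.

def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return the posterior probability of a belief after one piece of evidence."""
    numerator = p_evidence_if_true * prior
    total_evidence = numerator + p_evidence_if_false * (1 - prior)
    return numerator / total_evidence

# Hypothetical: a manager is 90% sure a team member "can't handle client work".
belief = 0.9
# One successful client meeting. Success is plausible either way,
# so the evidence is only mildly diagnostic against the belief.
belief = bayes_update(belief, p_evidence_if_true=0.3, p_evidence_if_false=0.8)
print(round(belief, 2))  # → 0.77
```

Notice that a single piece of disconfirming evidence shifts a strong prior only modestly (from 0.90 to about 0.77), which is one reason a lone counterexample rarely changes anyone's mind: model-updating usually takes repeated, accumulating evidence.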

We’ve seen this approach transform development conversations from adversarial debates into genuine collaborative inquiry, because it treats the other person’s mental model as a reasonable product of their experience rather than as an error to be corrected. Of course, to do this well you also need trust and the relational groundwork that builds it.

Criticisms and Limitations

No framework, however elegant, should be adopted uncritically, and predictive processing has attracted serious and legitimate criticism.

The most fundamental objection is that the framework may be too general to be falsifiable. If every aspect of cognition can be described in terms of prediction error minimisation, then what would count as evidence against the theory? This is the charge that has been levelled particularly at Friston’s free energy principle, which some critics argue is less a scientific hypothesis than a mathematical tautology: true by definition rather than by empirical test. Jakob Hohwy, in The Predictive Mind, has engaged thoughtfully with this concern while arguing that the framework does generate specific, testable predictions, but the debate is far from settled.

A second concern is the risk of “brain-centrism,” the tendency to explain everything in terms of neural computation while neglecting the role of the body, the environment, and social context. Clark himself is alert to this risk and has argued extensively for an “extended” version of predictive processing that includes environmental scaffolding and cultural practices. But there remains a tension between the framework’s emphasis on internal generative models and the reality that much of human cognition is embedded in, and distributed across, bodies, tools, social relationships, and cultural practices.

Third, the relationship between predictive processing and consciousness remains deeply contested. Seth’s work has made the most ambitious claims here, arguing that consciousness is what it feels like to be a certain kind of predictive system. But many philosophers and neuroscientists remain unconvinced that prediction error minimisation, however good an account of perception and action it may be, can actually explain why there is subjective experience at all. The “hard problem” of consciousness, as David Chalmers framed it, is not obviously solved by even the most sophisticated predictive model.

Fourth, there are practical limitations to the Bayesian brain metaphor. Real brains do not have access to the true prior probabilities and likelihoods that Bayesian inference requires. They work with approximations, heuristics, and learned associations that may only loosely resemble formal Bayesian computation. Whether the brain is “really” doing Bayesian inference or merely doing something that can be usefully described in Bayesian terms is an open question with significant implications for how literally we should take the framework’s predictions (if you will pardon the recursion).
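The gap between exact Bayesian inference and a cheap approximation can be made concrete with a toy example. The sketch below (our illustration, with invented numbers, not something from the sources cited here) estimates the bias of a coin two ways: exactly, via a Bayesian posterior mean, and heuristically, via a simple delta rule that just nudges its estimate in proportion to each prediction error.

```python
# Toy contrast: exact Bayesian estimation vs. a cheap heuristic that only
# loosely approximates it, echoing the question of whether brains "really"
# do Bayesian inference or merely something Bayes-like.

def beta_posterior_mean(heads: int, flips: int) -> float:
    """Exact Bayesian estimate of P(heads) under a uniform Beta(1, 1) prior."""
    return (1 + heads) / (2 + flips)

def delta_rule(estimate: float, outcome: int, learning_rate: float = 0.2) -> float:
    """Heuristic update: nudge the estimate toward each outcome in
    proportion to the prediction error (outcome - estimate)."""
    return estimate + learning_rate * (outcome - estimate)

outcomes = [1, 1, 0, 1, 1, 1, 0, 1]  # 1 = heads, 0 = tails

estimate = 0.5  # start agnostic
for outcome in outcomes:
    estimate = delta_rule(estimate, outcome)

exact = beta_posterior_mean(sum(outcomes), len(outcomes))
print(f"heuristic: {estimate:.2f}, exact Bayes: {exact:.2f}")
# → heuristic: 0.69, exact Bayes: 0.70
```

The two estimates land close together, but they are not the same: the delta rule weights recent observations more heavily and never represents its own uncertainty. That is roughly the open question in miniature, namely whether "Bayes-like" is close enough to justify taking the framework's formal machinery literally.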

Finally, there is a risk, particularly when the framework is applied to organisational contexts, of over-simplification and false certainty. The leap from “the brain minimises prediction error” to specific prescriptions for leadership and change management involves a great many intermediate steps, each of which introduces uncertainty and interpretation. The practical applications suggested above are, we believe, well-grounded in the framework, but they should be held as useful heuristics informed by good science rather than as iron laws derived from it.

Learning More

To go deeper, explore the related resources on people-shift.com.

The People Shift View

Love it. We find predictive processing one of the most genuinely illuminating frameworks to have emerged from cognitive science in recent decades, not because it answers every question, but because it sets out a coherent framework and asks some very good questions.

If the brain is a prediction machine, then the quality of our thinking, perceiving, and leading depends not on the accuracy of our sensory equipment but on the quality of our models, and on our willingness to let those models be updated by reality rather than defended against it. Perception is less about what we take in from the world than about what we project out onto it; the metaphor of the camera versus the projector is sometimes used to capture this distinction.

What strikes us, having worked with leaders and teams across a range of sectors and countries, is how rarely people are given the language or the conceptual tools to examine their own predictions. Let alone the language and tools to realise that what they experience is fundamentally a product of what they predict they will experience.

We operate, for the most part, inside models we have never made explicit, mistaking our brain’s best guesses for unmediated reality. And most people don’t know they live inside this constructed world: they think they live inside a “true” reality and, more unfortunately, that everyone shares it.

The predictive processing framework does not eliminate this problem, but it does help to make these challenges visible. It provides language, tools and metaphors to discuss it too. Creating this visibility can be a great first step in loosening the power of some of the stronger models that shape the way we see the world, particularly given that most of us don’t even know that we’re seeing the world through these models (or, more accurately, projecting the world through them). 

Our encouragement is straightforward. Treat your certainties with curiosity. Seek out the prediction errors you would normally avoid. Build environments where being surprised is safe rather than shameful. And remember that the world you perceive is not the world as it is, but the world as your brain expects it to be.

This is a very scary thing. But it’s also a very wonderful thing. 

Sources and Feedback

Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181-204.

Clark, A. (2015). Surfing Uncertainty: Prediction, Action, and the Embodied Mind. Oxford University Press.

Clark, A. (2023). The Experience Machine: How Our Minds Predict and Shape Reality. Pantheon.

Seth, A. K. (2021). Being You: A New Science of Consciousness. Faber & Faber.

Seth, A. K., & Friston, K. J. (2016). Active interoceptive inference and the emotional brain. Philosophical Transactions of the Royal Society B, 371(1708).

Friston, K. (2010). The free-energy principle: a unified brain theory? Nature Reviews Neuroscience, 11(2), 127-138.

Barrett, L. F. (2017). How Emotions Are Made: The Secret Life of the Brain. Houghton Mifflin Harcourt.

Hohwy, J. (2013). The Predictive Mind. Oxford University Press.

We’re a small organisation; we know we make mistakes and we want to fix them. Please contact us with any feedback you have on this post. We’ll usually reply within 72 hours.