The public sector has distinctive needs (auditability, transparency, evidence-based policy) that make explainable AI not just desirable but essential.
When a bank’s AI system declines a loan application, the customer can take their business somewhere else. Things are different when a government’s AI system influences a decision about a child’s welfare, a benefit entitlement, or a prison sentence. The stakes are of an entirely different order. There is often no alternative provider. There is no competitive market to correct the error. There is only the state, its decision, and a citizen who has a right to know why.
This is why the question of whether AI can explain its reasoning, which matters in every sector, matters most of all in the public sector. And it is why causal AI, which models genuine cause-and-effect relationships rather than statistical patterns, deserves serious attention from public sector leaders and from the private sector organisations that collaborate with them.
The need to make things clear
Democratic governance rests on a straightforward principle. Decisions that affect citizens must be explicable. Not merely recorded. Not merely auditable after the fact. Explicable: capable of being understood, questioned, and defended in terms an ordinary person can grasp.
Most AI systems deployed today cannot meet this standard. They find patterns in historical data and use those patterns to make predictions or flag anomalies. They can be extraordinarily accurate. But when asked why they reached a particular conclusion, they have no meaningful answer to give. The reasoning, such as it is, consists of statistical weights distributed across thousands or millions of parameters. It is not reasoning that a policymaker, a legislative committee, or a court could meaningfully interrogate.
For a long time, this limitation was largely theoretical. An interesting concern for researchers and ethicists, but not an operational problem. That is changing. Governments across the OECD are deploying AI into areas where the inability to explain a decision is not just an inconvenience but a democratic failure. According to the OECD, 67% of member countries are now using AI to improve public services, spanning functions from healthcare allocation to fraud detection to environmental regulation[1]. As the range of decisions touched by AI increases, so does the exposure created by systems that cannot explain themselves.
The difficulty is not hypothetical. In the United Kingdom, the Department for Science, Innovation and Technology (DSIT) has reported that only 8% of government AI projects demonstrate measurable benefits[2], a figure that reflects not just technical immaturity but the difficulty of embedding AI into decision-making processes that need to be transparent and accountable. Across 14 countries surveyed by Deloitte, 78% of public sector leaders reported struggling to measure the impact of AI initiatives, a proportion significantly higher than in the private sector[2]. The problem is not that government lacks ambition for AI. It is that the kinds of AI most readily available do not fit the governance context in which the public sector operates.
Why correlation is not enough for policy
The challenge runs deeper than explainability alone. Public sector policymaking is fundamentally a causal exercise. When a senior public sector leader asks whether a proposed intervention will reduce reoffending, improve educational outcomes, or cut hospital waiting times, they are asking a causal question. Will this action produce this effect? Predictive analysis can show that reoffending is associated with certain demographics, or that areas with more health visitors have healthier children. But it cannot tell us whether changing those factors would change the outcomes, or whether something else entirely explains the patterns.
This is precisely the gap that cost the hotel chain millions in an earlier article in this series – mistaking correlation for causation and investing in the wrong intervention. In the public sector, the consequences of the same mistake are measured not in wasted marketing budget but in misdirected public spending, ineffective programmes, and outcomes that fail to improve for the people who need them most.
Causal AI addresses this directly by modelling the mechanisms through which one variable genuinely influences another. It enables a different kind of analysis: not “what tends to happen alongside what?”, but “what would happen if we took this specific action?” This is the kind of question that policymakers need answered, and it is the kind of question that conventional predictive AI is not designed to address.
Consider the practical difference. A predictive model might identify that regions with higher police visibility tend to have lower crime rates. A policymaker might reasonably conclude that increasing police visibility will reduce crime. But a causal analysis might reveal that the regions with higher visibility are also wealthier, with lower unemployment and better-funded schools. And it is these underlying factors, not the police presence itself, that explain the difference. Acting on the correlation alone would lead to a policy that looks logical but achieves little. Acting on the causal analysis would redirect resources towards the interventions that drive the outcome.
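The distinction can be made concrete with a toy simulation. Everything here is synthetic and purely illustrative (the variable names and coefficients are invented, not drawn from any real study): regional wealth drives both police visibility and lower crime, so visibility correlates with crime reduction without causing it, and controlling for the confounder reveals the true, null effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Confounder: regional wealth drives both police visibility and (lower) crime.
wealth = rng.normal(size=n)
visibility = 0.8 * wealth + rng.normal(size=n)
crime = -1.0 * wealth + rng.normal(size=n)  # visibility has NO true effect here

# Naive model: regress crime on visibility alone.
X_naive = np.column_stack([np.ones(n), visibility])
naive_coef = np.linalg.lstsq(X_naive, crime, rcond=None)[0][1]

# Adjusted model: control for the confounder.
X_adj = np.column_stack([np.ones(n), visibility, wealth])
adj_coef = np.linalg.lstsq(X_adj, crime, rcond=None)[0][1]

print(f"naive effect of visibility on crime:    {naive_coef:+.2f}")  # clearly negative (spurious)
print(f"adjusted effect of visibility on crime: {adj_coef:+.2f}")    # near zero (the truth)
```

A policy built on the naive coefficient would spend money on visibility; the adjusted analysis points the spending elsewhere.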
Counterfactuals and accountability
One of the most valuable capabilities that causal approaches offer the public sector is counterfactual reasoning: the ability to ask what would have happened under different circumstances. In the public sector, this is not an abstract analytical exercise. It is the foundation of accountability.
When a public sector programme is evaluated, the core question is almost always counterfactual. Did the programme make a difference, or would the same outcomes have occurred without it? Would patients have recovered without the intervention? Would employment rates have improved regardless of the training scheme? These questions cannot be answered by looking only at what happened. They can only be answered by modelling what would have happened in the absence of the intervention, which is precisely what causal AI is designed to do.
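A minimal sketch of that counterfactual logic, using entirely synthetic data and a hypothetical training scheme (this is simple regression adjustment, one of the most basic counterfactual estimators, not any specific government methodology): motivated people both enrol more and do better anyway, so a naive before/after comparison overstates the scheme's effect, while modelling each participant's no-scheme outcome recovers it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4_000

# Motivation drives both enrolment in the scheme and employment outcomes.
motivation = rng.normal(size=n)
enrolled = motivation + rng.normal(size=n) > 0
true_effect = 0.5
outcome = motivation + true_effect * enrolled + rng.normal(size=n)

# Naive comparison: enrolled vs not. Biased, because the motivated self-select.
naive = outcome[enrolled].mean() - outcome[~enrolled].mean()

# Counterfactual estimate: fit outcome ~ motivation on the UNTREATED only,
# then predict each participant's no-scheme outcome and take the difference.
X0 = np.column_stack([np.ones((~enrolled).sum()), motivation[~enrolled]])
beta = np.linalg.lstsq(X0, outcome[~enrolled], rcond=None)[0]
counterfactual = beta[0] + beta[1] * motivation[enrolled]
effect = (outcome[enrolled] - counterfactual).mean()

print(f"naive difference:        {naive:+.2f}")   # well above the true 0.5
print(f"counterfactual estimate: {effect:+.2f}")  # close to the true 0.5
```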
A Harvard-affiliated research team has demonstrated this approach in public health, using counterfactual modelling to identify the genuine drivers of childhood disease in Pakistan. By separating causal factors from correlated ones, the analysis directed limited intervention resources towards the factors that influence disease spread, rather than those that merely appear alongside it[3]. When public health budgets are constrained, as they invariably are, the difference between targeting a cause and targeting a correlate is the difference between an effective programme and a well-intentioned one that achieves nothing.
Fraud detection offers another illustration. Public sector audit bodies are applying causal analysis to distinguish genuine fraud from benign anomalies in areas such as healthcare claims and benefit applications. Conventional systems flag patterns that deviate from the norm, generating large volumes of false positives that consume investigative resources. Causal models go further, identifying which anomalous patterns genuinely indicate fraud and which are explained by legitimate factors – focusing investigation where it will find wrongdoing[3].
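The contrast between the two detector styles can be sketched with invented numbers (synthetic clinics, a hypothetical "patient load" driver, and a deliberately simplified detector; real systems are far richer): a raw-outlier rule flags large honest clinics while missing small fraudulent ones, whereas flagging only what the legitimate driver cannot explain concentrates attention on the genuinely anomalous.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000

# Patient load legitimately drives claim volume; a handful of clinics inflate claims.
patients = rng.lognormal(mean=3.0, sigma=0.5, size=n)
claims = 2.0 * patients + rng.normal(scale=5.0, size=n)
fraud = np.zeros(n, dtype=bool)
fraud[:10] = True
claims[fraud] += 40.0

# Conventional detector: flag raw outliers. Tends to flag big, honest clinics.
z_raw = (claims - claims.mean()) / claims.std()
flagged_raw = z_raw > 2.0

# Causal-style detector: flag only what the legitimate driver cannot explain.
X = np.column_stack([np.ones(n), patients])
residual = claims - X @ np.linalg.lstsq(X, claims, rcond=None)[0]
flagged_adj = (residual - residual.mean()) / residual.std() > 2.0

print("raw flags:", flagged_raw.sum(), "| fraud caught:", flagged_raw[fraud].sum())
print("adjusted flags:", flagged_adj.sum(), "| fraud caught:", flagged_adj[fraud].sum())
```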
The rules are changing
The regulatory environment is reinforcing the case for causal approaches in the public sector. The EU AI Act imposes explicit requirements for transparency, explainability, and human oversight in high-risk AI applications, a category that encompasses many of the use cases most relevant to the public sector, from social benefit allocation to law enforcement to critical infrastructure management.
Research compiled by Fortune Business Insights, drawing on European Commission data, found that AI regulations across Europe reference causal inference over 2,200 times[4]. This is not incidental. Regulators are signalling that the ability to explain cause-and-effect reasoning, not merely to make predictions, is becoming a baseline expectation for AI deployed in important settings.
For public sector organisations, this creates both an obligation and an opportunity. The obligation is clear. As regulatory requirements tighten, AI systems that cannot explain their reasoning will become increasingly difficult to deploy lawfully in high-risk public sector situations. The opportunity is that causal approaches are fundamentally better aligned with these requirements. Because causal models include explicit reasoning mechanisms, showing how one thing leads to another, they can provide the audit trail that regulators and citizens expect. They can show why a recommendation was made, in terms of identified cause-and-effect relationships, rather than presenting a prediction without explaining why.
The honest picture
None of this means that causal AI is ready for universal deployment across the public sector. The technology is still maturing, and the public sector faces particular challenges in adopting it. Talent is scarce: building and maintaining causal models has historically required deep expertise in causal inference, a skill set in short supply even in the private sector. Data quality is another persistent obstacle. Causal models are demanding in their requirements for well-structured data with clear provenance, and many public sector data environments remain fragmented across departments and legacy systems.
There is also the broader context of public sector AI maturity to consider. When only 8% of public sector AI projects show measurable benefits, the case for adopting a more technically demanding form of AI requires careful framing. The answer is not to pursue causal AI as yet another technology initiative bolted onto existing structures. It is to recognise that the distinctive requirements of public sector decision-making (explainability, auditability, evidence-based reasoning) are precisely the requirements that causal approaches are designed to meet.
The starting point for most public sector organisations will not be a wholesale platform deployment. It will be identifying specific, high-value decision-making situations where understanding why would materially improve the quality of policy or operational decisions. Where would counterfactual analysis change how a programme is evaluated? Where would root cause analysis redirect resources towards interventions that actually work? Where would the ability to model the effects of a proposed policy, before implementing it, reduce the risk of costly failure?
The democratic argument
The deeper argument for causal AI in the public sector is not technical. It is democratic. Citizens have a right to understand the reasoning behind decisions that affect their lives. Regulators are increasingly insisting on it. And the complex, multi-causal nature of the problems the public sector exists to solve – poverty, health inequality, environmental degradation, crime – demands analytical approaches that can distinguish genuine drivers from statistical noise.
Predictive AI has been valuable for the public sector and will continue to be so. But for the decisions that carry the greatest weight, the ones that distribute resources, shape policy, and affect people’s lives, prediction without explanation is not enough. The public sector needs AI that can say why.
This is the fourth in a series of articles exploring Causal AI in the enterprise.
References
[1] OECD, “AI in Public Service Design and Delivery.”, 2025, https://www.oecd.org/en/publications/governing-with-artificial-intelligence_795de142-en/full-report/
[2] OECD, “Implementation Challenges That Hinder the Strategic Use of AI in Government.”, 2025, https://www.oecd.org/en/publications/governing-with-artificial-intelligence_795de142-en/full-report/
[3] Acalytica, “Causal AI Disruption Across Industries (2025–2026).”, 2025, https://acalytica.com/blog/causal-ai-disruption-across-industries-2025-2026
[4] Fortune Business Insights, “Causal AI Market Size, Industry Share | Forecast, 2026–2034.”, 2026, https://www.fortunebusinessinsights.com/causal-ai-market-112132