NHS Data and AI: Building Innovation on Trust


When Aneurin Bevan established the NHS in 1948, amid post-war austerity, he is said to have remarked that he had to “stuff the doctors’ mouths with gold” to overcome professional resistance. Fast-forward 75 years through successive waves of reform—from Thatcher’s internal market experiments to New Labour’s foundation trusts and the later Lansley reforms—and once again we stand at a critical juncture in healthcare, with AI promising transformation. However, the currency for buy-in today isn’t gold. It’s data.

The Labour government’s recently announced “AI Opportunities Action Plan” sets out an ambitious roadmap to make the UK an “AI superpower”. The healthcare industry is positioned as a key beneficiary and the NHS is expected to play a central role, supported by initiatives such as a proposed national data library. While the plan holds real promise, it also raises significant concerns. The biggest among them: how will sensitive NHS data be handled, who gets access, and how do we ensure the public remains confident in a system increasingly shaped by a “hidden process” of AI?

Potential and pitfalls

There’s no doubt that AI can improve patient outcomes and operational efficiency. Initiatives by NHS England already use AI to identify patients at risk of frequent emergency visits, enabling earlier intervention. Nevertheless, scaling these solutions demands access to vast quantities of health data — a shift that risks undermining public trust and data security if not handled with the utmost care.

Unlike traditional healthcare improvements, AI demands a new kind of contract with the public — one where data is the critical input, and trust is the foundation. If people feel their data is being misused or commercialised without consent, they may withhold vital information, undermining both diagnosis and care.

We’ve already seen how data misuse can damage trust. The Care.data programme collapsed in 2016, joining a long lineage of failed NHS digital initiatives, from the £12 billion abandoned National Programme for IT to the troubled NHS Digital Strategy—each faltering when reforms prioritised technical solutions over democratic engagement. We can’t afford to repeat the cyclical mistakes that have characterised NHS modernisation efforts since the 1990s.

Ethical data use is non-negotiable

AI introduces new layers of complexity into healthcare decision-making. Without clear governance, there is a danger that private firms could gain monopolistic advantages, using exclusive access to NHS data to train proprietary models. Absent a designated public service, an appropriate level of government oversight, or direct benefit to the public, these systems can operate as black boxes, making decisions without transparency or accountability.

This is particularly concerning as private investment in AI infrastructure ramps up. The recent £14 billion committed by companies such as Vantage Data Centres and Kyndryl echoes earlier watershed moments in NHS history—from the introduction of General Management under Roy Griffiths in 1983 to the Private Finance Initiative of the 1990s—where private sector logics reshaped public healthcare. These investments could profoundly shape the future of UK healthcare delivery, but only if guided by rigorous data ethics and transparency standards that learn from this complex history of public-private entanglement. Ethical, patient-centric data governance isn’t a barrier to innovation. It is the foundation that makes it sustainable.

Patient control and trust in AI is key

The success of AI in the NHS hinges not only on accuracy and efficiency but on public trust. Patients must be empowered to control their data at every step of the process and to understand what information is collected, why, and who has access.

Europe is already moving in this direction through initiatives like the European Digital Identity Wallet (EUDI Wallet) under eIDAS 2.0, which puts data control back in the hands of individuals. The NHS has the potential to do the same, evolving the NHS App into a fully fledged digital health wallet where patients can grant and revoke access based on need and explicit consent.

This is where decentralised technologies, like the W3C Solid protocol, come in. Developed by web inventor Sir Tim Berners-Lee, Solid enables individuals to store their personal data in secure “Agentic Wallets”, retaining full control over how it’s accessed and by whom. It’s a technical model designed to resist the extraction-based data economy that has dominated the tech world for the last two decades.
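The core idea can be sketched in a few lines of Python. This is an illustration of the patient-controlled access pattern Solid enables, not the Solid API itself: the class and method names (`PatientWallet`, `grant`, `revoke`, `read`) are hypothetical, and a real deployment would involve authenticated identities and standardised access-control documents rather than in-memory sets.

```python
# Illustrative sketch only: the patient holds the records and decides
# who may read which scope of data. Names are hypothetical, not drawn
# from the Solid specification or any NHS system.

class PatientWallet:
    def __init__(self):
        self._records = {}   # scope -> data, e.g. "medications"
        self._grants = set() # (requester, scope) pairs with active consent

    def store(self, scope, data):
        self._records[scope] = data

    def grant(self, requester, scope):
        self._grants.add((requester, scope))

    def revoke(self, requester, scope):
        self._grants.discard((requester, scope))

    def read(self, requester, scope):
        # Access succeeds only while explicit consent is in place;
        # revocation takes effect immediately.
        if (requester, scope) not in self._grants:
            raise PermissionError(f"{requester} has no consent for {scope}")
        return self._records[scope]


wallet = PatientWallet()
wallet.store("medications", ["metformin 500mg"])
wallet.grant("dr_smith", "medications")
wallet.read("dr_smith", "medications")    # allowed while consent stands
wallet.revoke("dr_smith", "medications")  # further reads now raise PermissionError
```

The key design point is that consent is the default-deny gate, not an after-the-fact policy: data simply cannot flow without a live grant from the patient.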

Far from slowing progress, this approach enhances it — by improving data quality, reducing systemic risks, and increasing patient engagement. It mirrors what we already know about good healthcare: outcomes improve when patients feel informed, respected, and in control.

Proven success in the UK

This isn’t just theory. A data-sharing pilot in Greater Manchester that gave clinicians secure access to patient-held records has improved care for vulnerable groups, including people with dementia. This model of secure, patient-controlled data sharing should serve as a blueprint for national policy.

Brigitte West, Product Director at DrDoctor, rightly pointed out that Labour’s plan is “much-needed recognition that AI can be used for operational reasons – so that clinical staff can spend less time on admin and more time delivering care.”

But to fully realise this, AI systems must be designed not just for efficiency but for fairness, accountability, and trust.

Avoiding the trap of “Big Data” thinking

There’s a long-standing temptation in health tech to centralise everything — to build ever-larger repositories under the banner of efficiency. But in reality, centralisation can restrict access, increase security risks, and lead to long-term dependence on external vendors.

Decentralised, standards-based systems like Solid allow for more scalable, secure, and flexible data use. Each patient’s data lives in their own secure wallet, and clinicians access only what’s necessary with explicit consent and clear audit trails. This design dramatically reduces the risk of mass data breaches while increasing transparency and control.
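The audit-trail half of that design can be sketched as well. The snippet below is a hypothetical illustration of consent-gated access where every attempt, successful or not, is logged; all names (`AuditedRecordStore`, `set_consent`, `access`) are invented for this example and do not correspond to any real NHS or Solid interface.

```python
# Hypothetical sketch: every access attempt is checked against an explicit
# consent and recorded in an append-only log, so the patient can later see
# exactly who asked for what, and when.
from datetime import datetime, timezone

class AuditedRecordStore:
    def __init__(self):
        self._data = {}        # scope -> record, e.g. "allergies"
        self._consents = set() # (clinician, scope) pairs the patient allowed
        self.audit_log = []    # append-only trail of every access attempt

    def store(self, scope, record):
        self._data[scope] = record

    def set_consent(self, clinician, scope, allowed):
        if allowed:
            self._consents.add((clinician, scope))
        else:
            self._consents.discard((clinician, scope))

    def access(self, clinician, scope):
        # Log the attempt whether or not it succeeds: denied requests are
        # part of the transparency story, not just granted ones.
        allowed = (clinician, scope) in self._consents
        self.audit_log.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "who": clinician,
            "scope": scope,
            "allowed": allowed,
        })
        if not allowed:
            raise PermissionError(f"no consent: {clinician} -> {scope}")
        return self._data[scope]
```

Because the log records denials as well as grants, a breach attempt leaves evidence even when it fails — the opposite of a centralised silo, where unauthorised bulk access is often invisible until after the fact.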

It’s time to move past the idea that innovation means building bigger silos. Real innovation means rethinking the structure altogether by creating a system where data works for people, not the other way around.

A future built on trust

For AI to succeed in the NHS, it must be understandable, explainable, and accountable. Patients need to know how AI arrives at its recommendations and be able to trust the results. Clinicians need the ability to interrogate and challenge its decisions in real time without negative impacts on patients. That’s how we ensure AI complements, rather than replaces, human expertise in healthcare.

Labour’s “Plan for Change” could indeed deliver a modernised healthcare system. But it must be built on a foundation of trust, not just technology. As we move forward, the term “national data library” must be more than a metaphor — it should reflect a distributed, accessible, user-controlled system where data is borrowed with permission and returned with respect.

Just as the NHS revolutionised access to care in the 1940s—emerging from the crucible of war and the Beveridge Report’s vision of combating the ‘five giants’ of Want, Disease, Ignorance, Squalor and Idleness—we now have a chance to revolutionise how data underpins that care while remaining faithful to those founding principles that have weathered decades of political contestation. If we get it right, we can unlock the full potential of AI in healthcare — while staying true to the NHS’s founding values of fairness, equality, and public good.

By Davi Ottenheimer, VP of Trust and Digital Ethics, Inrupt