
The Revolution That Has Not Come
I have repeatedly asserted in my previous blogs that the information revolution has the power to utterly transform healthcare. This belief has driven my career choices and how I spend my time – now divided equally between clinical practice and digital transformation. But as I have also observed, the revolution has been slow to come. Notwithstanding the now almost ubiquitous deployment of electronic health information systems – and the hubris surrounding AI – it does not feel like the ‘digital agenda’ has delivered the benefits it promised at the outset. Why is that?
It’s the Operating Model – Stupid
Digital has been top of the ‘agenda’ for well over 20 years in healthcare. There hasn’t been an NHS Plan where digital has not been a key pillar of its delivery – including this government’s latest 10-year plan. But despite billions of pounds having been spent – resulting in the deployment and re-deployment of ever more sophisticated technology in health settings – the way healthcare is delivered has not fundamentally changed. The doctors joining the equivalent training schemes today that I joined 30 years ago are doing exactly the same job that I did, in more or less exactly the same way – albeit recording it on a computer instead of on paper. So are the nurses and allied health professionals. A patient’s experience of care is essentially identical – the care settings (GP surgeries, outpatient departments, wards and operating theatres) the same – as is the cumbersome booking process required to access them (if anything, worse).
As CCIO, I am currently presiding over a major deployment of an EPR across a three-hospital system in the south of the UK. The task of reconciling three separate hospitals’ operating processes and bringing about agreement on the convergence of clinical pathways and practice is phenomenally complex. The task is littered with wicked problems – interestingly, more of these occupy the operational sphere than the clinical one. We have benefited from decades of evidence-driven clinical practice and national guidance, which means there is fairly broad agreement on clinical pathways. How you run your hospital, by contrast, is an evolved phenomenon driven by the history and geography of a particular institution. Codifying all this in a shared EPR is possible – but it takes time, money and infinite patience.
This is the core of the problem – digital hasn’t delivered because we have digitised a flawed operating model. The operating model for the delivery of healthcare is riven with discontinuities – because it is designed around institutions. Our operating managers and professionals spend a huge amount of time smoothing over these discontinuities – shepherding patients through the system. Sometimes with digital systems helping and sometimes with them getting in the way.
We need to re-design our operating model with the only agent of continuity in the system at its heart – the patient.
How Radical Is That?
What does a patient-centred operating model actually look like? Notwithstanding the often-repeated mantra of ‘patient-centred care’, in our current system this is a mere slogan: institutions pay lip service to it but make no effort to design themselves around it. Our current operating model is firmly institution-based, and within those institutions, service-based. The accountability, flow of money, performance metrics and digital systems all serve to entrench the siloed operating model – leaving the hapless patient to navigate the dangerous gaps between.
Not only does the patient have to navigate the gaps – but they do so having left vital information behind – the institutions having confiscated their information and locked it away in digital vaults or even physical ones. The rules of confidentiality and GDPR prevent those institutions sharing that information even when it is in the patient’s interests to do so. And those same institutions guard that information jealously even from the patients themselves – fostering ignorance and dependence.
This has to change. If we are to bring about the left shift – from secondary to primary care, and from primary care to self-care – then we need to make the enablement of self-care the pre-eminent goal of the operating model. This starts with placing information and knowledge in the hands of patients. We have to upturn the institution/patient hierarchy – making the providers of healthcare servants to the empowered patient.
The Digital Personal Health Record
The spine of the patient-centred operating model would be a digital one – the Digital Personal Health Record (DPHR). It is something we will all have – whether we are patients or not. A bit like a bank account: though hosted in the cloud, it would carry a strong sense of personal ownership – you would know what’s in it, you would give permission to others to add to it and you would decide who has access to see its contents. It would be a longitudinal record of every healthcare transaction you experience on your lifelong healthcare journey. It would follow you to every healthcare setting you encounter – both public and private, hospital and community. You would have the power to give access to any service or agent that you feel adds value to you.
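By way of illustration only, the permissioning idea at the heart of the DPHR – the patient owns the record and can grant or revoke access to anyone else – can be sketched in a few lines of toy code. Every class and method name here is hypothetical, invented for this sketch; a real DPHR would of course rest on cryptographic identity, consent law and far more besides.

```python
# Toy sketch of DPHR-style patient-controlled access (all names hypothetical).
from dataclasses import dataclass, field

@dataclass
class DPHR:
    owner: str
    entries: list = field(default_factory=list)   # longitudinal record of transactions
    readers: set = field(default_factory=set)     # agents the patient allows to view
    writers: set = field(default_factory=set)     # agents the patient allows to add

    def grant(self, agent: str, can_write: bool = False) -> None:
        """Only the patient grants access; write permission implies read."""
        self.readers.add(agent)
        if can_write:
            self.writers.add(agent)

    def revoke(self, agent: str) -> None:
        """The patient can withdraw access at any time."""
        self.readers.discard(agent)
        self.writers.discard(agent)

    def add_entry(self, agent: str, entry: str) -> None:
        if agent != self.owner and agent not in self.writers:
            raise PermissionError(f"{agent} has no write access")
        self.entries.append((agent, entry))

    def read(self, agent: str) -> list:
        if agent != self.owner and agent not in self.readers:
            raise PermissionError(f"{agent} has no read access")
        return list(self.entries)

record = DPHR(owner="patient")
record.grant("gp_surgery", can_write=True)
record.add_entry("gp_surgery", "2025-01-10: blood pressure 128/82")
record.revoke("gp_surgery")   # access withdrawn; the record stays with the patient
```

The point of the sketch is the inversion it encodes: institutions appear only as named agents whose access exists at the patient’s pleasure, rather than as the default custodians of the data.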
And this is where the power of our upturned operating model is truly unleashed – because in the hands of patients the detailed granular health information will be exposed to a new generation of health IT powered by AI. The doctor in your pocket will become a reality – and that doctor will soon know you better than you know yourself. That doctor in your pocket will have insights from access to millions of health records and billions of health transactions that no human doctor could possibly have. That doctor in your pocket will empower you to care for yourself.
Now that is radical.

But…
There will be strong resistance to this notion: counter-arguments, and powerful cases made for maintaining the status quo. Can patients be trusted to keep their own health record, make wise decisions and treat themselves? How do we assure the safety of such a system? How could such a model assure equity of access and be trusted not to leave the vulnerable and disadvantaged out in the cold? What does this mean for the human professionals required to deliver healthcare? And what does it mean for our institutionally anchored NHS? These objections deserve to be met seriously, and one by one.
Patients doing it for themselves
Perhaps the most instinctive objection to the DPHR is the paternalistic one: can patients really be trusted to manage their own health records, make sound decisions, and take responsibility for their own care? The honest answer is that they can – and moreover, that they should. The assumption that patients cannot be trusted is itself a product of the institution-centred model we are trying to dismantle. Healthcare professionals are not, by virtue of their training, reliable arbiters of a patient’s best interests; we bring our own biases, heuristics, and blind spots to every clinical encounter. Patients, by contrast, are the only agents with continuous, lived knowledge of their own bodies, priorities, and circumstances. What patients have historically lacked is not the capacity for wise decision-making, but access to the information and tools that would enable it.
This is precisely where AI changes the calculus. A well-designed AI – drawing on the depth and breadth of the DPHR, and trained across millions of comparable health journeys – could support patients to understand their condition, weigh their options, and navigate the system with confidence. Crucially, it would also know when to refer: directing patients to human professionals not for every query, but for those that genuinely require clinical judgement, complex intervention, or human connection. The result would be a healthcare workforce liberated to do what only humans can do – while the vast middle ground of routine monitoring, minor illness, and health maintenance is handled closer to the patient, by the patient, with intelligent support. This is not a diminishment of professional expertise; it is a better use of it.

Taming the jungle
A second, and entirely legitimate, objection concerns safety – in two distinct but related senses. The first is informational security: a lifelong, granular health record held in the cloud represents an extraordinarily valuable and sensitive asset. The risks of breach, misuse, or exploitation are real. Yet this challenge is not unique to healthcare, and it is not insurmountable. Banking, legal services, and government have all developed robust frameworks for protecting highly sensitive personal data in digital environments. The DPHR would demand the same rigour – strong encryption, patient-controlled permissioning, and clear legal liability for those who misuse access – but the technology and regulatory architecture to achieve this already exists in embryonic form.
The second and perhaps more complex safety question concerns the quality of the system’s outputs: how do we know the AI is giving good advice? How do we protect patients from the digital equivalent of the quack – the rogue application, the poorly trained model, the commercial interest dressed as clinical guidance? Here, regulation must evolve to meet the technology. Just as medicines are licensed, clinical devices certified, and professionals registered, AI health tools operating within the DPHR ecosystem would need to meet defined standards of evidence, transparency, and accountability before being granted access. An independent regulatory body – akin to the MHRA but for AI-driven health applications – would be responsible for approving tools, monitoring their outputs, and withdrawing accreditation where safety signals emerge.
Critically, the architecture of the system itself should generate the data needed for this oversight: every recommendation made, every referral prompted, every outcome recorded becomes part of a continuous audit trail. In this sense, the DPHR ecosystem could, over time, be safer than the current model – not despite its scale, but because of it. The signal-to-noise ratio in a system processing millions of health journeys simultaneously would make the detection of harmful patterns faster, and the correction of them more decisive, than anything achievable through today’s fragmented and largely opaque clinical landscape.
Justice and Equity
The equity objection is perhaps the most difficult to dismiss, and it deserves to be met with honesty rather than optimism. There is an uncomfortable truth at the heart of the digital agenda: the benefits of digitisation have not been evenly distributed. Across healthcare, financial services, and public administration, the shift to digital-first delivery has quietly but systematically disadvantaged those who are older, less educated, less affluent, or living with cognitive or sensory impairment. A model that places the DPHR at the centre of healthcare delivery risks amplifying these existing inequalities – creating, in effect, a two-tier system in which the digitally confident thrive while the most vulnerable are left further behind. This risk must be named, designed against, and actively monitored – not treated as an edge case to be solved later.
And yet the relationship between digital technology and disadvantage is not straightforwardly one of exclusion. For many people who struggle to navigate traditional, institution-based systems – whether because of anxiety, language barriers, disability, distrust of authority, or simply the friction of booking an appointment and sitting in a waiting room – digital tools can be profoundly empowering. The same paternalistic structures that fail patients in general tend to fail marginalised patients most of all. A well-designed DPHR, with accessible interfaces, multilingual support, and the ability to delegate trusted access to a carer or advocate, could lower barriers that our current system raises.
The honest position, then, is this: digital inclusion is a precondition of this model, not an afterthought. The rollout of the DPHR would need to be accompanied by serious, sustained investment in digital literacy and access – and the design of the system itself must centre the needs of the most vulnerable from the outset, not retrofit them at the end. The goal is not a healthcare system that works brilliantly for the many and abandons the few. It is one that works better for everyone, and transformatively better for those our current system serves least well.
The New Model Doctor
No transformation of this scale could leave the healthcare workforce untouched, and it would be dishonest to pretend otherwise. The question is not whether the roles of healthcare professionals will change, but how – and for whom the change will be most profound. The answer, perhaps counterintuitively, is that those whose work is most physically grounded are likely to be most resilient. The nurses, paramedics, physiotherapists, and surgeons whose value lies in what their hands, eyes, and presence can do – the irreducibly human acts of physical examination, surgical intervention, wound care, and compassionate touch – will remain indispensable in ways that no AI system can replicate. The ‘doers’ of healthcare are, in this sense, relatively protected.
It is the ‘thinkers’ – and here we must be candid – whose role faces the more fundamental disruption. The cognitive work of diagnosis, differential reasoning, risk stratification, and treatment selection is precisely the domain in which AI, trained on vast datasets and free from the fatigue, bias, and variability of human cognition, is advancing most rapidly. The doctor as information processor – taking a history, ordering tests, interpreting results, and formulating a plan – is a role that the DPHR ecosystem, over time, will perform with increasing competence. This does not mean the end of the doctor. It means the reinvention of the doctor.
The question we must ask is: what is the irreducible added value of human clinical interaction, and when is it genuinely warranted? The answer lies in several domains that AI cannot yet – and may never fully – inhabit: the navigation of genuine clinical complexity and uncertainty; the integration of medical judgement with the patient’s values, fears, and life circumstances; the delivery of difficult news and the holding of human distress; and the exercise of ethical reasoning in situations where the right course of action is not calculable but must be judged. These are not peripheral activities. They are the heart of medicine, and they deserve to be resourced and valued as such – rather than buried, as they currently are, beneath a mountain of tasks that do not require them.
This reimagining has profound implications for how doctors are trained and what skills are most valued. If the DPHR and its AI layer handle the routine, the technical, and the algorithmic, then medical education must orient itself towards what remains: clinical reasoning in conditions of genuine uncertainty, communication, ethical judgement, and – critically – the skills needed to work alongside, interrogate, and where necessary challenge AI-mediated recommendations. The doctor of the future will need to be, in part, a clinical scientist of AI outputs: understanding how models are trained, where they are likely to fail, what biases they may carry, and when their recommendations should be overridden.
There is a further dimension worth drawing out explicitly. Medicine has always operated on two distinct planes simultaneously: the delivery of what we already know, and the advancement of what we do not yet know. The former – the application of established evidence to the care of individual patients – represents the vast bulk of clinical activity. The latter – the research, the clinical trials, the hard thinking at the edge of human knowledge – is where medicine renews itself, and where many of its most gifted practitioners feel most alive. The tragedy of our current system is that these two endeavours are in constant competition for the same scarce resource: the time and cognitive energy of skilled clinicians. AI’s most profound contribution to medicine may ultimately be not what it does at the bedside, but what it frees the medical mind to do elsewhere. By closing the gap between what we know and what we routinely deliver – consistently, tirelessly, and at scale – it could liberate the thinkers of medicine to do what they alone can do: push the frontier forward.
Long Live the NHS
The final and perhaps most systemic objection is institutional: what does a patient-centred, AI-mediated model of healthcare mean for the NHS itself? This is where the argument becomes most politically charged, and where honesty requires a degree of care. The NHS is not, despite its branding, a single organisation with a unified purpose and a coherent will. It is an ecosystem – a vast, loosely federated collection of trusts, commissioners, regulators, quangos, and agencies, each with its own governance, its own incentives, and its own definition of success. What binds them is not a shared operating model but a shared political identity – and it is politics, ultimately, that shapes the NHS’s priorities in ways that frequently distort its core clinical mission.
The result is a system that expends extraordinary energy managing upwards – satisfying ministerial imperatives, hitting politically constructed targets, and generating the data that answers the political question rather than the operational one. Innovation is announced but rarely embedded. Transformation is piloted endlessly and scaled almost never. The NHS cannot easily stop what it is doing in order to start something new, because what it is doing is load-bearing – politically, institutionally, and financially.
None of this is a counsel of despair, nor is it an argument against the NHS. The founding principle – that healthcare should be available to all, free at the point of need, based on clinical necessity rather than the ability to pay – remains not just admirable but essential. The problem is not the mission. The problem is that the institutional architecture built to deliver that mission has, over decades, become an end in itself. The preservation of institutions, the protection of professional boundaries, and the management of political risk have quietly displaced the patient as the system’s organising principle – which is, of course, precisely the argument this essay has been making from the outset.
The DPHR does not threaten the NHS’s mission. In a profound sense, it is an attempt to rescue it – to rebuild the delivery architecture around the founding promise rather than the institutional accretion that has grown up around it. But this will require the NHS to do something it has historically found almost impossible: to look clearly at the ways in which its own structures have become part of the problem, and to relinquish the institutional control that the current model depends upon. That is a political challenge as much as a technical one. It will require political leadership willing to take on powerful institutional interests in the name of patients – and a clinical and managerial workforce willing to reimagine its own role. Neither can be taken for granted. But the alternative – continuing to pour resource into a system that is structurally incapable of delivering the transformation it perpetually promises – is no longer a viable option.
Conclusion
The Digital Personal Health Record is not merely a technological proposal – it is a philosophical reorientation of healthcare. For too long, the system has been built around institutions that accumulate patient data as a byproduct of transactions, rather than around individuals whose lifetime health story ought to belong to them. The DPHR would reverse that ownership, placing patients at the centre of their own care and unlocking the transformative potential of AI in a way that top-down, institution-first digitisation never could.
The objections are real – questions of trust, safety, equity, professional identity, and institutional inertia must be taken seriously and worked through, not dismissed. But as we have seen, these are implementation challenges, not reasons to abandon the vision. Other industries – banking, travel, retail – have already made this shift, placing data and agency in the hands of the individual. Healthcare, despite its unique complexity and the profound stakes involved, cannot remain the exception indefinitely.
The question is not whether this transformation will happen, but whether the NHS will lead it, shape it, and ensure it serves everyone – or whether it will be swept along by forces beyond its control. The DPHR offers a framework to take the initiative, to rescue the NHS’s founding mission from the institutional accretion that has obscured it, and to build a system that works better for every patient – and transformatively better for those our current system serves least well. The status quo is not sustainable. The time to act is now.









