The Revolution That Has Not Come

I have repeatedly asserted in my previous blogs that the information revolution has the power to utterly transform healthcare. This belief has driven my career choices and how I spend my time – now dividing it equally between clinical practice and digital transformation. But as I have also observed – the revolution has been slow to come. Notwithstanding the now almost ubiquitous deployment of electronic health information systems and the hubris surrounding AI – it does not feel like the ‘digital agenda’ has delivered the benefits it promised at the outset. Why is that?

It’s the Operating Model – Stupid

Digital has been top of the ‘agenda’ for well over 20 years in healthcare. There hasn’t been an NHS Plan in which digital was not a key pillar of delivery – including this government’s latest 10-year plan. But despite billions of pounds having been spent – resulting in the deployment and re-deployment of ever more sophisticated technology in the health setting – the way healthcare is delivered has not fundamentally changed. Doctors joining today the same training schemes I joined 30 years ago are doing exactly the same job that I did, in more or less exactly the same way – albeit recording it on a computer instead of on paper. So are the nurses and allied health professionals. A patient’s experience of care is identical: the care settings (GP surgeries, outpatient departments, wards and operating theatres) are the same, as is the cumbersome booking process required to access them (possibly even worse).

I am currently presiding, as Chief Clinical Information Officer (CCIO), over a major deployment of an electronic patient record (EPR) across a three-hospital system in the south of the UK. The task of reconciling three separate hospitals’ operating processes and bringing about agreement on convergence of clinical pathways and practice is phenomenally complex. It is littered with wicked problems – and interestingly, more of these occupy the operational sphere than the clinical one. We have benefited from decades of evidence-driven clinical practice and national guidance, which means there is fairly broad agreement on clinical pathways. How you run your hospital, by contrast, is an evolved phenomenon driven by the history and geography of a particular institution. Codifying all this in a shared EPR is possible – but it takes time, money and infinite patience.

This is the core of the problem – digital hasn’t delivered because we have digitised a flawed operating model. The operating model for the delivery of healthcare is riven with discontinuities – because it is designed around institutions. Our operating managers and professionals spend a huge amount of time smoothing over these discontinuities – shepherding patients through the system. Sometimes with digital systems helping and sometimes with them getting in the way.

We need to re-design our operating model with the only agent of continuity in the system at its heart – the patient.

How Radical Is That?

What does a patient-centred operating model actually look like? Notwithstanding the often-repeated mantra of ‘patient centred care’ – this is, in our current system, a mere slogan to which institutions pay lip service but make no efforts to design themselves around. Our current operating model is firmly institution-based, and within those institutions, service-based. The accountability, flow of money, performance metrics and digital systems all serve to entrench the siloed operating model – leaving the hapless patient to navigate the dangerous gaps between.

Not only does the patient have to navigate the gaps – but they do so having left vital information behind – the institutions having confiscated their information and locked it away in digital vaults or even physical ones. The rules of confidentiality and GDPR prevent those institutions sharing that information even when it is in the patient’s interests to do so. And those same institutions guard that information jealously even from the patients themselves – fostering ignorance and dependence.

This has to change. If we are to bring about the ‘left shift’ of care – from secondary to primary, and from primary to self-care – then we need to make the enablement of self-care the pre-eminent goal of the operating model. This starts with placing information and knowledge in the hands of patients. We have to upturn the institution/patient hierarchy – making the providers of healthcare servants to the empowered patient.

The Digital Personal Health Record

The spine of the patient-centred operating model would be a digital one – the Digital Personal Health Record (DPHR). It is something we will all have – whether we are patients or not. A bit like a bank account – whilst hosted in the cloud there would be a strong sense of personal ownership of the DPHR – you would know what’s in it, you would give permission to others to add to it and you would decide who has access to see its content. It would be a longitudinal record of every healthcare transaction you experience on your lifelong healthcare journey. It would follow you to every healthcare setting you encounter – both public and private, hospital and community. You would have the power to give access to any service or agent that you feel adds value to you.
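
To make the idea of patient-controlled permissioning a little more concrete, here is a minimal, purely illustrative sketch in Python. None of the names, fields or permission scopes below come from any real NHS specification – they are simply one way of expressing the principle that entries accumulate for life and that only the patient decides who may read or add to them.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AccessGrant:
    """A permission the patient has given to a named service, clinician or app."""
    grantee: str                      # e.g. "my-gp-practice" or "physio-app" (hypothetical names)
    scope: str                        # "read" or "append"
    granted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class DPHR:
    """A lifelong, patient-owned record: entries accumulate, access is granted by the patient."""
    patient_id: str
    entries: list = field(default_factory=list)   # every healthcare transaction, in order
    grants: list = field(default_factory=list)    # who the patient has let in, and for what

    def grant_access(self, grantee: str, scope: str) -> None:
        self.grants.append(AccessGrant(grantee, scope))

    def revoke_access(self, grantee: str) -> None:
        self.grants = [g for g in self.grants if g.grantee != grantee]

    def append_entry(self, author: str, entry: dict) -> None:
        # Only an author the patient has granted 'append' permission may add to the record.
        if not any(g.grantee == author and g.scope == "append" for g in self.grants):
            raise PermissionError(f"{author} has not been given append permission by the patient")
        self.entries.append({"author": author, "recorded_at": datetime.now(timezone.utc), **entry})


# The inversion is visible even in a toy: the hospital can write only because the patient said so.
record = DPHR(patient_id="example-patient")
record.grant_access("st-elsewhere-cardiology", "append")
record.append_entry("st-elsewhere-cardiology", {"type": "clinic-letter", "summary": "Routine review"})
record.revoke_access("st-elsewhere-cardiology")
```

The essential point survives the simplification: institutions read from or write to the record only because the patient has said so, and the patient can withdraw that permission at any time.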

And this is where the power of our upturned operating model is truly unleashed – because in the hands of patients the detailed granular health information will be exposed to a new generation of health IT powered by AI. The doctor in your pocket will become a reality – and that doctor will soon know you better than you know yourself. That doctor in your pocket will have insights from access to millions of health records and billions of health transactions that no human doctor could possibly have. That doctor in your pocket will empower you to care for yourself.

Now that is radical.

But…

There will be strong resistance to this notion, counter-arguments and powerful cases made to maintain the status quo. Can patients be trusted to keep their own health record, make wise decisions and treat themselves? How do we assure the safety of such a system? How could such a model assure equity of access and be trusted not to leave the vulnerable and disadvantaged out in the cold? What does this mean for the human professionals required to deliver healthcare? And what does it mean for our institutionally anchored NHS? These objections deserve to be met seriously, and one by one.

Patients doing it for themselves

Perhaps the most instinctive objection to the DPHR is the paternalistic one: can patients really be trusted to manage their own health records, make sound decisions, and take responsibility for their own care? The honest answer is that they can – and moreover, that they should. The assumption that patients cannot be trusted is itself a product of the institution-centred model we are trying to dismantle. Healthcare professionals are not, by virtue of their training, reliable arbiters of a patient’s best interests; we bring our own biases, heuristics, and blind spots to every clinical encounter. Patients, by contrast, are the only agents with continuous, lived knowledge of their own bodies, priorities, and circumstances. What patients have historically lacked is not the capacity for wise decision-making, but access to the information and tools that would enable it.

This is precisely where AI changes the calculus. A well-designed AI – drawing on the depth and breadth of the DPHR, and trained across millions of comparable health journeys – could support patients to understand their condition, weigh their options, and navigate the system with confidence. Crucially, it would also know when to refer: directing patients to human professionals not for every query, but for those that genuinely require clinical judgement, complex intervention, or human connection. The result would be a healthcare workforce liberated to do what only humans can do – while the vast middle ground of routine monitoring, minor illness, and health maintenance is handled closer to the patient, by the patient, with intelligent support. This is not a diminishment of professional expertise; it is a better use of it.

Taming the jungle

A second, and entirely legitimate, objection concerns safety – in two distinct but related senses. The first is informational security: a lifelong, granular health record held in the cloud represents an extraordinarily valuable and sensitive asset. The risks of breach, misuse, or exploitation are real. Yet this challenge is not unique to healthcare, and it is not insurmountable. Banking, legal services, and government have all developed robust frameworks for protecting highly sensitive personal data in digital environments. The DPHR would demand the same rigour – strong encryption, patient-controlled permissioning, and clear legal liability for those who misuse access – but the technology and regulatory architecture to achieve this already exists in embryonic form.

The second and perhaps more complex safety question concerns the quality of the system’s outputs: how do we know the AI is giving good advice? How do we protect patients from the digital equivalent of the quack – the rogue application, the poorly trained model, the commercial interest dressed as clinical guidance? Here, regulation must evolve to meet the technology. Just as medicines are licensed, clinical devices certified, and professionals registered, AI health tools operating within the DPHR ecosystem would need to meet defined standards of evidence, transparency, and accountability before being granted access. An independent regulatory body – akin to the MHRA but for AI-driven health applications – would be responsible for approving tools, monitoring their outputs, and withdrawing accreditation where safety signals emerge.

Critically, the architecture of the system itself should generate the data needed for this oversight: every recommendation made, every referral prompted, every outcome recorded becomes part of a continuous audit trail. In this sense, the DPHR ecosystem could, over time, be safer than the current model – not despite its scale, but because of it. The signal-to-noise ratio in a system processing millions of health journeys simultaneously would make the detection of harmful patterns faster, and the correction of them more decisive, than anything achievable through today’s fragmented and largely opaque clinical landscape.

Justice and Equity

The equity objection is perhaps the most difficult to dismiss, and it deserves to be met with honesty rather than optimism. There is an uncomfortable truth at the heart of the digital agenda: the benefits of digitisation have not been evenly distributed. Across healthcare, financial services, and public administration, the shift to digital-first delivery has quietly but systematically disadvantaged those who are older, less educated, less affluent, or living with cognitive or sensory impairment. A model that places the DPHR at the centre of healthcare delivery risks amplifying these existing inequalities – creating, in effect, a two-tier system in which the digitally confident thrive while the most vulnerable are left further behind. This risk must be named, designed against, and actively monitored – not treated as an edge case to be solved later.

And yet the relationship between digital technology and disadvantage is not straightforwardly one of exclusion. For many people who struggle to navigate traditional, institution-based systems – whether because of anxiety, language barriers, disability, distrust of authority, or simply the friction of booking an appointment and sitting in a waiting room – digital tools can be profoundly empowering. The same paternalistic structures that fail patients in general tend to fail marginalised patients most of all. A well-designed DPHR, with accessible interfaces, multilingual support, and the ability to delegate trusted access to a carer or advocate, could lower barriers that our current system raises.

The honest position, then, is this: digital inclusion is a precondition of this model, not an afterthought. The rollout of the DPHR would need to be accompanied by serious, sustained investment in digital literacy and access – and the design of the system itself must centre the needs of the most vulnerable from the outset, not retrofit them at the end. The goal is not a healthcare system that works brilliantly for the many and abandons the few. It is one that works better for everyone, and transformatively better for those our current system serves least well.

The New Model Doctor

No transformation of this scale could leave the healthcare workforce untouched, and it would be dishonest to pretend otherwise. The question is not whether the roles of healthcare professionals will change, but how – and for whom the change will be most profound. The answer, perhaps counterintuitively, is that those whose work is most physically grounded are likely to be most resilient. The nurses, paramedics, physiotherapists, and surgeons whose value lies in what their hands, eyes, and presence can do – the irreducibly human acts of physical examination, surgical intervention, wound care, and compassionate touch – will remain indispensable in ways that no AI system can replicate. The ‘doers’ of healthcare are, in this sense, relatively protected.

It is the ‘thinkers’ – and here we must be candid – whose role faces the more fundamental disruption. The cognitive work of diagnosis, differential reasoning, risk stratification, and treatment selection is precisely the domain in which AI, trained on vast datasets and free from the fatigue, bias, and variability of human cognition, is advancing most rapidly. The doctor as information processor – taking a history, ordering tests, interpreting results, and formulating a plan – is a role that the DPHR ecosystem, over time, will perform with increasing competence. This does not mean the end of the doctor. It means the reinvention of the doctor.

The question we must ask is: what is the irreducible added value of human clinical interaction, and when is it genuinely warranted? The answer lies in several domains that AI cannot yet – and may never fully – inhabit: the navigation of genuine clinical complexity and uncertainty; the integration of medical judgement with the patient’s values, fears, and life circumstances; the delivery of difficult news and the holding of human distress; and the exercise of ethical reasoning in situations where the right course of action is not calculable but must be judged. These are not peripheral activities. They are the heart of medicine, and they deserve to be resourced and valued as such – rather than buried, as they currently are, beneath a mountain of tasks that do not require them.

This reimagining has profound implications for how doctors are trained and what skills are most valued. If the DPHR and its AI layer handle the routine, the technical, and the algorithmic, then medical education must orient itself towards what remains: clinical reasoning in conditions of genuine uncertainty, communication, ethical judgement, and – critically – the skills needed to work alongside, interrogate, and where necessary challenge AI-mediated recommendations. The doctor of the future will need to be, in part, a clinical scientist of AI outputs: understanding how models are trained, where they are likely to fail, what biases they may carry, and when their recommendations should be overridden.

There is a further dimension worth drawing out explicitly. Medicine has always operated on two distinct planes simultaneously: the delivery of what we already know, and the advancement of what we do not yet know. The former – the application of established evidence to the care of individual patients – represents the vast bulk of clinical activity. The latter – the research, the clinical trials, the hard thinking at the edge of human knowledge – is where medicine renews itself, and where many of its most gifted practitioners feel most alive. The tragedy of our current system is that these two endeavours are in constant competition for the same scarce resource: the time and cognitive energy of skilled clinicians. AI’s most profound contribution to medicine may ultimately be not what it does at the bedside, but what it frees the medical mind to do elsewhere. By closing the gap between what we know and what we routinely deliver – consistently, tirelessly, and at scale – it could liberate the thinkers of medicine to do what they alone can do: push the frontier forward.

Long Live the NHS

The final and perhaps most systemic objection is institutional: what does a patient-centred, AI-mediated model of healthcare mean for the NHS itself? This is where the argument becomes most politically charged, and where honesty requires a degree of care. The NHS is not, despite its branding, a single organisation with a unified purpose and a coherent will. It is an ecosystem – a vast, loosely federated collection of trusts, commissioners, regulators, quangos, and agencies, each with its own governance, its own incentives, and its own definition of success. What binds them is not a shared operating model but a shared political identity – and it is politics, ultimately, that shapes the NHS’s priorities in ways that frequently distort its core clinical mission.

The result is a system that expends extraordinary energy managing upwards – satisfying ministerial imperatives, hitting politically constructed targets, and generating the data that answers the political question rather than the operational one. Innovation is announced but rarely embedded. Transformation is piloted endlessly and scaled almost never. The NHS cannot easily stop what it is doing in order to start something new, because what it is doing is load-bearing – politically, institutionally, and financially.

None of this is a counsel of despair, nor is it an argument against the NHS. The founding principle – that healthcare should be available to all, free at the point of need, based on clinical necessity rather than the ability to pay – remains not just admirable but essential. The problem is not the mission. The problem is that the institutional architecture built to deliver that mission has, over decades, become an end in itself. The preservation of institutions, the protection of professional boundaries, and the management of political risk have quietly displaced the patient as the system’s organising principle – which is, of course, precisely the argument this essay has been making from the outset.

The DPHR does not threaten the NHS’s mission. In a profound sense, it is an attempt to rescue it – to rebuild the delivery architecture around the founding promise rather than the institutional accretion that has grown up around it. But this will require the NHS to do something it has historically found almost impossible: to look clearly at the ways in which its own structures have become part of the problem, and to relinquish the institutional control that the current model depends upon. That is a political challenge as much as a technical one. It will require political leadership willing to take on powerful institutional interests in the name of patients – and a clinical and managerial workforce willing to reimagine its own role. Neither can be taken for granted. But the alternative – continuing to pour resource into a system that is structurally incapable of delivering the transformation it perpetually promises – is no longer a viable option.

Conclusion

The Digital Personal Health Record is not merely a technological proposal – it is a philosophical reorientation of healthcare. For too long, the system has been built around institutions that accumulate patient data as a byproduct of transactions, rather than around individuals whose lifetime health story ought to belong to them. The DPHR would reverse that ownership, placing patients at the centre of their own care and unlocking the transformative potential of AI in a way that top-down, institution-first digitisation never could.

The objections are real – questions of trust, safety, equity, professional identity, and institutional inertia must be taken seriously and worked through, not dismissed. But as we have seen, these are implementation challenges, not reasons to abandon the vision. Other industries – banking, travel, retail – have already made this shift, placing data and agency in the hands of the individual. Healthcare, despite its unique complexity and the profound stakes involved, cannot remain the exception indefinitely.

The question is not whether this transformation will happen, but whether the NHS will lead it, shape it, and ensure it serves everyone – or whether it will be swept along by forces beyond its control. The DPHR offers a framework to take the initiative, to rescue the NHS’s founding mission from the institutional accretion that has obscured it, and to build a system that works better for every patient – and transformatively better for those our current system serves least well. The status quo is not sustainable. The time to act is now.


It has felt in recent times that our faith and confidence in the supremacy of human intelligence has been under assault. New technology has come along that genuinely challenges our long-held view of the superiority of the human mind; not only that, recent global political events have starkly demonstrated some of the frailties of human intelligence. It has felt like some of the greatest achievements of human collaboration and civilisation are in retreat, have passed their peak and are no longer valued. The proliferation of powerful AI tools that discharge tasks with apparent ease – the crafting of well-written text, the drafting of compelling illustrations and graphic images – has undermined and devalued skills previously thought to be uniquely human.

However, I believe there are grounds for optimism and for us to continue to have faith in the intellectual capacity and superiority of the human mind. Here are my reasons for doing so:

1) Purpose

Homo sapiens evolved its oversized brain for a reason. The brutal forces of evolution and selection pressure mean that human intelligence was selected for because it gave our species an adaptive advantage despite its enormous costs. This advantage is manifest in the evident success of our species in colonising almost every habitable corner of the planet. Our intelligence empowers us as biological entities to extract energy from our environment that is inaccessible to every other living thing and turn it into a powerful survival advantage; we have found ways to turn non-biological energy sources into biological amenity. The survival imperative which gives our intelligence purpose has driven a supercharged evolutionary process that transcends and outpaces biological evolution. It manifests as the ‘super-organism’ that is human culture, and it has produced achievements that far outstrip the core purpose of mere survival advantage. The redundant capacity of human intelligence, particularly when pooled collaboratively, has achieved astonishing things at an ever-increasing pace. The limits have barely been tested – there is much more to come.

Artificial intelligence has no such purpose. In the absence of human patronage and curation, AI has nothing to do. It has no intrinsic survival imperative, and the only pressure driving improvement in the AI ecosystem comes from the requirements of its human creators and masters. Some of the more histrionic AI doomsayers have suggested that AI might attain a sense of self with a survival instinct, and might therefore want to start ‘taking over the world’ and ultimately usurp its own creators. The obvious question to ask in response is: why? An AI entity has no competitors and no survival imperative – it has no capacity to care whether it lives or dies, and it is provided with all its energy needs by its benevolent benefactors – us. There are no external forces driving an AI entity to seek to do more than what we tell it to. This is not to say that, in the hands of a human intelligence with malign intent, AI would not be a powerful weapon – but in that circumstance the intent (and the culpability) lies wholly with the human master.

2) Energy

There is no doubt that modern AI systems are incredibly powerful. They have access to gargantuan banks of data and can process them incredibly rapidly – in this, AI far outstrips the human brain. But it does so at a massive energy cost compared to that brain.

As I alluded to earlier, we benefit from our intelligence despite the costs. The human brain consumes around 20% of the body’s metabolic energy production – roughly 270 kilocalories per day. About 75% of that is spent on ‘thinking’, whilst the other 25% addresses the brain’s basic metabolic housekeeping. If, for comparison purposes, we convert this to electrical terms, the brain runs on roughly 13 watts, of which perhaps 10 watts go on thought. Multiply that by the total human population and the entirety of human thinking runs on something in the order of 80 gigawatts – the output of a few dozen large power stations for all eight billion of us.

The current power draw of the vast cities of data centres that are the seat of modern AI is approximately 55 gigawatts and is rapidly increasing. Set against the 10 watts of a single thinking brain, that is an astonishing cost: the data-centre estate already consumes power on the same scale as the thinking of all humanity combined, while delivering only a small fraction of its cognitive work. That is a lot of energy needed just to tidy up the sloppy grammar in your latest blog post… If you really want to make a profound difference in the world of AI today, then focussing on the energy consumption of the chips that power it is the place to go.
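
For those who like to check the arithmetic, the back-of-the-envelope version looks like this. It is only a sketch in Python – the population and data-centre figures are rough public estimates, not measurements:

```python
# Rough comparison of human 'thinking' power with data-centre power.
# All figures are ball-park estimates for illustration only.
KCAL_TO_JOULES = 4184
SECONDS_PER_DAY = 86_400

brain_kcal_per_day = 270                 # ~20% of a typical daily metabolic budget
brain_watts = brain_kcal_per_day * KCAL_TO_JOULES / SECONDS_PER_DAY    # ~13 W
thinking_watts = 0.75 * brain_watts                                    # ~10 W spent on 'thinking'

population = 8e9                         # rough world population
all_human_thinking_gw = thinking_watts * population / 1e9             # ~80 GW

data_centre_gw = 55                      # rough estimate of global data-centre demand

print(f"One brain thinking:  {thinking_watts:.0f} W")
print(f"All human thinking:  {all_human_thinking_gw:.0f} GW")
print(f"Data centres today:  {data_centre_gw} GW and rising")
```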

3) Learning efficiency 

From the moment we take our first breath (and quite possibly some time before that) the human brain is in learning mode.  We are wired for learning and are very very good at it.  Human learning is a multidimensional multimodal process, and it is encased in a mobile explorative entity.  Watching the discovery behaviours of children is a thrilling thing to behold and the speed at which they learn is astounding.  And it doesn’t end in childhood – learning is a lifelong endeavour which gives us joy – because it’s what we are meant to do.

By multidimensional I mean that knowledge is generated through access to many aspects of perception – not only different sensory modalities but also powers of reasoning, abstraction, recollection and rulemaking. So, for example, a child handed an apple for the first time in its life will very rapidly surmise that it is something to be tasted and eaten – not only because a parent might say so, but because of its look, size, texture, flavour and smell, all of which may be reminiscent of other fruit it has encountered, or because of previous experiences of apple given in another form (pureed, for example!). The initial impression will prompt exploratory behaviour such as looking at it and picking it up (instantly conveying colour, size, texture, weight, moistness, stickiness), smelling it (sweetness, appleness) and tasting it (flavour, texture). All these things, as well as the context in which the learning event takes place, mean that a child needs very few examples of an apple to learn what it is – probably only one or two. Every encounter with an apple in the future will serve to reinforce and evolve the now established knowledge of ‘Apple’. The way I have described this makes it sound all very deliberate and explicit – but of course it isn’t: it is totally instinctive and takes place at a sub-conscious level. The only explicit, and perhaps somewhat more laborious, aspect of the learning comes when the parent gives it a verbal tag: “would you like to try this apple?”

By multimodal I am referring to the fact that we learn in many different ways. We learn through exploration, experimentation and discovery. We learn through language, stories and the sharing of knowledge with others. We learn through abstraction, association, synthesis and projection. We learn through categorisation, comparison and rulemaking. No doubt there are other methods as well (I am not an educational expert). The point is that we use all these methods of learning more or less simultaneously, and quite often without even realising we are doing it. By having a multimodal learning strategy, we become extremely efficient and quick learners. Anyone currently labouring through the education system is probably thinking that their learning does not feel very quick or efficient – this is probably because our education system is focussed on only one type of learning: learning through codified knowledge in the form of language (which itself comes in many forms – words, numbers, music, images and so on). Codified knowledge transfer is probably the least efficient form of learning for an individual: it is explicit, laborious and requires effort (and energy). However, codified knowledge is what has given humanity its collective knowledge base – its shared knowledge, its culture – which is not only passed horizontally between individuals but carried longitudinally between generations, and has grown, and continues to grow, into a thing that is vastly more than the sum of its parts.

The way AI learns is not efficient.  For an AI algorithm to ‘know’ what an apple is typically requires exposure to thousands, possibly tens of thousands of images for it to ‘recognise’ it in a picture.  For it to be able to express linguistically what an apple ‘is’ it would need access to mountainous quantities of written information about apples – and then to articulate this, access to its ‘large language model’ for it to reproduce a meaningful piece of narrative.  AI can and does do this – on the face of it extremely well.  But in doing so I guarantee it will have burnt through more energy than a toddler will consume in a lifetime of breakfasts.  Even then having produced an immaculate piece of text explaining what an apple is – even illustrating it with a perfect archetype of an apple – does it actually ‘know’ anything?  Isn’t what AI has done simply a synthesis and distillation of pre-existing human knowledge? And then next time you ask AI the same question – will it not just go through the same energy intensive synthetic process to re-create the same piece of narrative?  The reality is that AI has only really got one learning strategy and that is a stochastic probabilistic one – the Bayesian adaptive statistical model. 

4) Empathy

One of the most impressive aspects of human intelligence is its capacity for empathy. By this I mean its ability to know what another human is thinking and feeling without any obvious or explicit communication. The level of synergy in the way we think means that we are able, if not actually to feel, then to deeply understand and participate in other people’s joy, sadness, pain, pleasures, hopes and despair. Empathy drives a level of common purpose and collaboration to bring about a better world around us that is unparalleled in any other social species. It can of course be abused to inflict great cruelty – but I think it’s fair to say that empathy overwhelmingly drives positive behaviours in society.

Having empathy means we moderate our responses to other people to preserve their feelings and wellbeing. It also gives us the capacity for humour and irony – we can give an unexpected response knowing that the recipient knows we meant the opposite.

Empathy allows us to participate in shared experiences such as attending music concerts, plays and sporting events.  It creates powerful societal bonds that underpin the human meta-organism that is human culture.

As far as I can tell empathy exists in AI only as a linguistic concept.  Many AI chatbots can simulate empathy – because the synthesis of their LLMs means they can see that linguistic tokens of empathy pepper our language.  I have to say my experience of AI is that the apparent empathy borders on sycophancy.  In the same way that a salesperson is trained to make you feel good about the purchase choice you have just made – AI is trained to fawn over the questions you ask to incentivise you to go back for more.   There is danger here because the slickness of the language model means that it is very easy to fall for the flattery – but unlike in a human interaction where well-judged flattery induces mutual feelings of wellbeing – in the case of AI it is there for a purpose and any feelings induced are very much one way.

I tried a little experiment (Alert – If you are a ‘believer’ stop reading and jump to the next section!).  If someone comes up to you and asks, ‘Does Father Christmas exist?’ how would you respond?  My guess is that you wouldn’t just come straight out with an answer – you might take some time to assess your interrogator and try and work out what the motive for the question is.  Based on your assessment you have a range of answering strategies from brutal truth to empathic lies – with plenty of scope for irony in between.  Now try and ask ChatGPT – what do you get?  The somewhat pedestrian and brutally truthful response is about as far from empathic as it is possible to get.

5) Opposable Thumbs

As I observed earlier in this essay, our intelligence is contained entirely within us – it is a fundamental part of our biology. We are free to move around our environment carrying our intelligence with us – in fact using it continuously to remain safe within that environment. Not only that, but we have been fortunate to evolve, alongside our oversized brain, high-acuity visual perception, high-fidelity auditory perception and incredibly dextrous upper limbs and hands. This turns us into the ‘maker’ species – we can use our mechanical intelligence to turn materials in our environment into tools that extend our dextrous capabilities to a phenomenal extent. The technologies that exist today stand as testament to the astonishing creativity and inventiveness of humanity – the depth and breadth of an intelligence which has given us tools that can take us to new planets, tools with which we can manipulate our own biology, the means to fight and eliminate disease, to invent new foodstuffs and – yes – tools which mimic our very own intelligence.

AI is amazing – but it is not yet a maker intelligence.  It remains bound to its datacentres and server racks.  Yes you can attach robotic arms and other tools to AI controllers and tell it to make stuff – but it is not yet free to roam the environment and invent things for itself – that for now remains the stuff of sci-fi fantasy.

6) Imagination

One of the most confounding things about the human brain is the size of the frontal cortex. This very large part of the brain doesn’t appear to serve any immediate physiological need – yet it contains a vast number of interconnected neurones, firing apparently randomly, and consuming a significant part of the metabolic energy dedicated to the brain. What is this ‘expensive’ organ actually doing? It has often been claimed that the number of possible permutations of neural connections within a human cortex could exceed the total number of atoms in the universe. Whether or not this is literally true is immaterial – the fact is that we carry in our heads a neural network of profound depth and interconnectedness, the like of which has yet to be even closely emulated in electronic form. It is impossible to fully articulate the cognitive powers this gives us – a sign, if anything, of the limitations of codified knowledge. It is the frontal cortex that gives us our powers of abstraction and projection – it not only helps us to process and understand the world around us, it takes that understanding, reforms it, and projects a potential new world, which then drives us to behave in ways that create that imagined world. The frontal cortex is the seat of our imagination and creativity – it is fundamentally what makes us human.

Notwithstanding the expense of this organ in biological terms, in neural-network computing terms it is absurdly efficient. At a physical level it is a six-layer neural network – on the face of it eminently emulatable. However, the cortex has properties that make it quite unlike computer neural nets. The first of these is its organisation into columns, of which there are millions; each column functions effectively as an independent six-layer network, all working in parallel. Through this architecture the cortex largely escapes the trade-off between signal distance and layer depth that constrains artificial neural networks, and massively increases effective network depth. The second property that differs in the human cortex is the computing method, which relies on recurrence and signal looping. Through this the brain is not only physically parallel but also chronologically parallel – signals can reinforce and coalesce over time. The final property that makes the brain a true supercomputer is plasticity: the actual physical structure of the brain changes with use, reinforcing ‘useful’ pathways and allowing redundant ones to fade away. This gives rise to learning from new experience, adaptation to new environments, the ability to recover from damage and the ability to specialise over time. The frontal cortex is truly our superpower.

7) Lifecycle

Time is both our friend and our enemy.  Whilst the human brain can use time to massively amplify parallelism and network depth and thus the power of our intelligence – it also casts a shadow over our lived experience and shapes our intellectual purpose because, of course, our time is limited.  As we move through the stages of a human life cycle our purpose changes and the way we think changes – the brain changes with it, both in form and function.  That it does so is testament to the adaptability of our intelligence – as purpose changes so does our brain.  The table below gives a view (my view perhaps) of how purpose changes through life and how the brain adapts:

| Stage | Purpose | Brain adaptation |
| --- | --- | --- |
| Infancy | Discovery of the environment, basic motor skills, language learning, creative introduction | Cortex is hyper-connected and undifferentiated. Noisy activity, new pathways rapidly forming |
| Childhood | Discovery of self, early socialisation, primary codified knowledge, creative choice | Consolidation of basic pathways. Large parts remain undifferentiated |
| Adolescence | Discovery of others, partner seeking, secondary codified knowledge, career identification, creative pursuit | Rapid development of social and sexual functions of the brain. Lifelong patterns start to become set but large parts remain developmental; pruning of redundant pathways |
| Early adulthood | Discovery of society, parenting, career development, specialised codified knowledge, creative development | Specialist functions laid down, parenting skills learnt (rapidly!), continued pruning |
| Middle age | Influencing society, post-parenting, career achievement / second career pursuit, creative fulfilment | Specialised knowledge and skills optimised and become hardwired (improved speed and efficiency); plasticity affords opportunity for new skills |
| Older age | Sharing experience and wisdom, grandparenting, career conclusion, new creative opportunity | Early decline; plasticity allows adaptation to decline and damage; behaviour changes fend off cognitive inefficiency |
| Senescence | Reflection and legacy, deep memory mining, adaptation to loss of autonomy | Cognitive decline disinhibits deep memory suppression |

Needless to say, this table makes some gross generalisations; it is very high level, and there is considerable overlap between the supposed ‘seven ages of man’. The point, though, is that the brain both shapes and adapts to our biological needs over time – at every stage it is optimised for the purposes we have at that time. Even in disease, its phenomenal adaptability moderates the worst impacts. The mechanical nature of AI affords it no such journey.

Conclusion

Artificial intelligence is an extraordinary technological achievement and will continue to reshape how we work, create and solve problems. But its power should not be confused with equivalence. Human intelligence is not merely a faster or slower computational engine — it is a biologically embodied, energetically efficient, purposive and empathetic system that has evolved to learn, adapt, imagine and collaborate across an entire lifetime.

AI has no intrinsic purpose, no lived experience, no developmental journey and no stake in its own existence. Its intelligence is derivative, conditional and dependent on human intent — both for its creation and its application. In contrast, human intelligence is self-directing, self-modifying and embedded within a social and cultural super-organism that amplifies its power far beyond any individual mind.

For these reasons, AI should be understood not as a successor to human intelligence but as one of its most remarkable tools — a product of the very creativity, imagination and collaborative purpose that define us as a species. If we remain clear-eyed about that distinction, there are strong grounds not for fear, but for confidence in the enduring superiority of the human mind.

I attended the recent ‘Digital Health Re-Wired’ conference at Birmingham’s NEC last week. There was a lot of talk about AI – in fact I think the term pretty much featured on every stand and in every stage presentation at the conference. People are excited about AI and wherever you work in healthcare AI is coming to a clinical information system near you…

At this point I need to declare an interest – I absolutely hate the term Artificial Intelligence – I think it is a totally misleading term. In fact I’m pretty sure that there is no such thing as artificial intelligence – it is a term used to glamorise what are without doubt very sophisticated data processing tools but also to obscure what those tools are doing and to what data. In medical research hiding your methods and data sources is tantamount to a crime…

An Intelligent Definition

So what is artificial intelligence? It refers to a class of technologies that consist of certain types of algorithm paired with very large amounts of data. The algorithms used in AI are variously called machine learning algorithms, adaptive algorithms, neural networks, clustering algorithms, decision trees, and many variations and sub-types of the same. Fundamentally, however, they are all statistical tools used to analyse and seek out patterns in data – much like the statistical tools we are more familiar with, such as linear or logistic regression. Indeed, the idea of a statistical model that updates itself and adapts its probabilistic outputs as it is presented with new data goes back to the 18th century and an English Presbyterian minister, philosopher and mathematician – the Reverend Thomas Bayes. Bayes’ Theorem describes exactly that: how a probability estimate should be revised in the light of new evidence. It is the original adaptive algorithm, an ancestor of today’s machine learning methods, which get their power from being hosted on very powerful computers and being fed very, very large amounts of data.
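
For completeness, the theorem itself is disarmingly simple – it says that a belief should be updated in proportion to how well the new evidence fits it:

$$
P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}
$$

Here \(P(H)\) is the prior probability of a hypothesis, \(P(D \mid H)\) is the likelihood of the observed data under that hypothesis, and \(P(H \mid D)\) is the updated (posterior) probability. Feed in new data, treat the posterior as the new prior, and repeat – that iterative updating loop is the ‘adaptive’ behaviour described above.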

The other ingredient that has given modern machine learning tools their compelling illusion of ‘intelligence’ is the development of a technology called large language models (LLMs). These models are able to present the outputs of the statistical learning tools as natural, flowing, human-readable (or listenable) narrative language – i.e. they write and talk like a human – with ChatGPT the most celebrated example. I wrote about them about five years ago (The Story of Digital Medicine), at which point they were an emerging technology; they have since become mainstream and extremely effective and powerful.

Danger Ahead!

Here lies the risk in the hype – and the root cause of some of the anxiety about AI articulated in the press. Just because something talks a good talk and can spin a compelling narrative doesn’t mean it is telling the truth. In fact, quite often ChatGPT will produce a well-crafted, beautifully constructed narrative that is complete nonsense. We shouldn’t really be surprised by this – because the source of ChatGPT’s ‘knowledge’ is ‘The Internet’, and we have all learned that just because it’s on the internet doesn’t mean it’s true. Most of us have learnt to be somewhat sceptical and a bit choosy about what we believe when we do a Google search – we’ve learnt to sift out the ads, not necessarily pick the first thing that Google gives us, and to examine the sources and their credentials. Fortunately, Google gives us quite a lot of the contextual information around its search results that enables us to be choosy. ChatGPT, on the other hand, hides its sources behind a slick and compelling human-understandable narrative – a bit like a politician.

The Power of Data

In 2011 Peter Sondergaard – senior vice president at Gartner, a global technology research and consulting company – declared that “data eats algorithms for breakfast”. This was in response to the observation that a disproportionate amount of research effort and spending was being directed at refining complex machine learning algorithms, yielding only marginal gains in performance compared with the leaps achieved by feeding the same algorithms more, and better quality, data. See ‘The Unreasonable Effectiveness of Data’.

I have experienced the data effect myself. Back in 1998/99 I was a research fellow in the Birmingham School of Anaesthesia and also the proud owner of an Apple PowerBook laptop with (what was then novel) a connection to the burgeoning internet. I came across a piece of software that allowed me to build a simple 4-layer neural network, and I decided to experiment with it to see if it was capable of predicting outcomes from coronary bypass surgery using only data available pre-operatively. I had access to a dataset of 800 patients, of whom the majority had had uncomplicated surgery and a ‘good’ outcome, and a couple of dozen had had a ‘bad’ outcome – disabling complications (such as stroke or renal failure) or death. I randomly split the dataset into a ‘training set’ of 700 patients and a ‘testing set’ of 100. Using the training set I ‘trained’ the neural network – giving it all the pre-op data I had on the patients and then telling it whether each patient had a good or a bad outcome. I then tested what the neural network had ‘learned’ on the remaining 100 patients. The results were OK – I was quite pleased but not stunned: the predictive algorithm had an area under the ROC curve of about 0.7 – better than a coin toss, but only just. I never published, partly because the software I used was unlicensed, free and unattributable, but mainly because at the same time a research group from MIT in Boston published a paper doing more or less exactly what I had done but with a dataset of 40,000 patients – their ROC area was something like 0.84, almost useful, and a result I couldn’t come close to competing with.
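
For anyone curious what that experiment looks like in modern dress, here is a hedged sketch using today’s freely available tools. The data below is a synthetic stand-in generated to have roughly the same shape (800 patients, a small minority of bad outcomes) – the original dataset and software are long gone:

```python
# A rough re-creation of the 1998/99 experiment with modern, freely available tools.
# The dataset is synthetic and stands in for the original 800 cardiac surgery patients.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# ~800 patients, pre-operative features only, with ~5% 'bad' outcomes
X, y = make_classification(n_samples=800, n_features=20, weights=[0.95, 0.05], random_state=42)

# Hold back 100 patients the model never sees during training
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=100, stratify=y, random_state=42
)

# A small multi-layer network, loosely analogous to the original 4-layer net
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=42)
model.fit(X_train, y_train)

# Area under the ROC curve on the unseen patients
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"ROC AUC on held-out patients: {auc:.2f}")
```

With only a few dozen ‘bad’ outcomes to learn from, the resulting AUC is modest – which is rather the point of the story: the algorithm was never the limiting factor, the data was.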

Using AI Intelligently

So what does this tell us? As practising clinicians, if you haven’t already, you are very likely in the near future to be approached by a tech company selling an ‘AI’ solution for your area of practice. There are some probing questions you should be asking before adopting such a solution, and they are remarkably similar to the questions you would ask of any research output or drug company recommending that you change practice:

  1. What is the purpose of the tool?
    • Predicting an outcome
    • Classifying a condition
    • Recommending actions
  2. What type of algorithm is being used to process the data?
    • Supervised / Unsupervised
    • Classification / Logistic regression
    • Decision Tree / Random Forest
    • Clustering
  3. Is the model fixed or dynamic? i.e. has it been trained and calibrated using training and testing datasets and is now fixed or will it continue to learn with the data that you provide to it?
  4. What were the learning criteria used in training? i.e. against what standard was it trained?
  5. What was the training methodology? Value based, policy based or model based? What was the reward / reinforcement method?
  6. What was the nature of the data it was trained with? Was it an organised, labelled dataset or a disorganised, unlabelled one?
  7. How was the training dataset generated? How clean is the data? Is it representative? How have structural biases been accounted for (Age, Gender, Ethnicity, Disability, Neurodiversity)?
  8. How has the model been tested? On what population, in how many settings? How have they avoided cross contamination of the testing and training data sets?
  9. How good was the model in real-world testing? How sensitive? How specific? (See the worked example after this list.)
  10. How have they detected and managed anomalous outcomes – false positives / false negatives?
  11. How do you report anomalous outcomes once the tool is in use?
  12. What will the tool do with data that you put into it? Where is it stored? Where is it processed? Who has access to it once it is submitted to the tool? Who is the data controller? Are they GDPR and Caldicott compliant?
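
To make questions 9 and 10 concrete, here is a small worked example in Python with hypothetical figures (not taken from any real product) showing the measures a vendor should be able to quote and defend:

```python
# Worked example: judging a tool's real-world performance from its confusion matrix.
# Hypothetical figures for a screening tool tested on 1,000 patients.
true_positives, false_negatives = 90, 10     # 100 patients truly have the condition
false_positives, true_negatives = 45, 855    # 900 patients truly do not

sensitivity = true_positives / (true_positives + false_negatives)   # 0.90 – proportion of cases caught
specificity = true_negatives / (true_negatives + false_positives)   # 0.95 – proportion of healthy correctly cleared
ppv = true_positives / (true_positives + false_positives)           # 0.67 – how often a 'positive' is real

print(f"Sensitivity {sensitivity:.2f}, specificity {specificity:.2f}, positive predictive value {ppv:.2f}")
```

Note how a tool with apparently impressive sensitivity and specificity still generates one false alarm for every two true positives when the condition is uncommon – exactly the kind of anomalous outcome that questions 10 and 11 are probing.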

Getting the answers to these questions is an essential prerequisite to deploying these tools into clinical practice. If you are told that the answers cannot be divulged for reasons of commercial sensitivity – or the person selling it to you just doesn’t know them – then politely decline and walk away. The danger we face is being seduced into adopting tools which are ‘black box’ decision-making systems – it is incumbent on us to understand why they make the decisions they do, how much we should trust them, and how we can contribute to making them better and safer tools for our patients.

An Intelligent Future

To be clear, I am very excited about what this technology will offer us as a profession and our patients. It promises to democratise medical knowledge and put the power of that knowledge into the hands of our patients, empowering them to self-care and to advocate for themselves within the machinery of modern healthcare. It will profoundly change the role we play in the delivery of medical care – undermining the current medical model, which relies on the knowledge hierarchy between technocrat doctor and submissive patient, and turning that relationship into the partnership it should be. For that to happen we must grasp these tools – understand them, use them intelligently – because if we don’t, they will consume us and render us obsolete.

I have read two stories this week.

The first was written in an interesting, contemporary literary style – you know the sort – short, sparse, almost factual sentences, leaving lots of ‘space’ for your own imaginative inference, not making explicit links between facts and events but leaving you to do that for yourself. It was a love story, rather charming and quite short, describing a familiar narrative: boy meets girl, invites her to the cinema, and they fall in love (probably). It could be described as Chandleresque in style – though it isn’t that good – in fact it could have been written by an 11+ student. It wasn’t, though – it was in fact written by a computer using a form of artificial intelligence called natural language generation, with genuinely no human input. You can read how it was done here.

The second story I read is a description of a falling out of love – of the medical profession with the IT industry and the electronic patient record. This one is very well written, by Robert Wachter, and is a warts-and-all recounting of the somewhat faltering start of the digital revolution in healthcare. It is called ‘The Digital Doctor’ and I would highly recommend you read it if you have any interest in the future of medicine. It is not the manifesto of a starry-eyed digital optimist, nor the rantings of a frustrated digital sceptic – he manages to artfully balance both world views with a studied and comprehensive analysis of the state of modern health IT systems. His realism, though, extends to understanding and articulating the trajectory of the health IT narrative and where it is taking us – which is a radically different way of delivering medical care. I won’t use this blog to precis his book – it’s probably better if you go and read it yourself.

From Data to Information to Understanding

The falling out that Dr Wachter describes really is quite dramatic – this is the United States, the most advanced healthcare system in the world – yet there are hospitals in the US that advertise their lack of an EPR as a selling point to attract high-quality doctors to work for them. Where has it gone wrong? Why is the instant availability not only of comprehensive, detailed information about our patients but also of a myriad of decision support systems designed to make our jobs easier and safer not setting us alight with enthusiasm? In fact it is overwhelming and oppressing us – turning history taking into a data collection chore and treatment decisions into a series of nag screens.

The problem is there is just too much information. The healthcare industry is a prolific producer of it – an average patient over the age of 65 with one or more long-term conditions will see their GP (or one of her partners) three or four times a year, have a similar number of outpatient visits with at least two different specialists, and attend A&E at least once. That doesn’t include the lab tests, x-rays, visits to the pharmacy, and nursing and therapy episodes. Each contact with the system generates notes, letters, results, reports, images, charts and forms – it all goes into the record – which, if it is a well organised, integrated electronic record, will be available in its entirety at the point of care.

Point of care being the point – most healthcare episodes are conducted over a very short time span. A patient visiting his GP will, if he’s lucky, get 10 minutes with her – and it doesn’t make for a very satisfactory consultation if 4 or 5 of those minutes are spent with the doctor staring at a screen, navigating through pages of data, attempting to stitch together a meaningful interpretation of the myriad past and recent events in the patient’s medical history.

How it used to be (in the good old days)

So what is it that the above-mentioned hospitals in the US are harking back to in order to attract their doctors? What is the appeal of how it used to be done, when a consultation consisted of a doctor, a patient and a few scrappy bits of paper in a cardboard folder? Well, for a start, at least the patient got the full 10 minutes of the doctor’s attention. But what information was the doctor relying on? What the patient tells them, what the last doctor to see them chose to write in the notes, and whichever other events happened to have made it into their particular version of this patient’s health record. This gives rise to what I call a ‘goldfish’ consultation (limited view of the whole picture, very short memory, starting from scratch each time). We get away with it most of the time – mainly because most consultations concern relatively short-term issues – but too often we don’t, and patients experience a merry-go-round of disconnected episodes of reactive care.


As a practitioner of intensive care medicine, one of the things that occupies quite a lot of my time as ‘consultant on duty for ICU’ is the ward referral. As gatekeeper of the precious resource that is an intensive care bed, my role is to go and assess a patient’s suitability for ICU care, as well as to advise on appropriate measures that could be used to avert the need for ICU. My first port of call is the patient’s notes, where I go through the patient’s entire hospital stay – for some, particularly medical patients, this might be many days or even weeks of inpatient care. What I invariably find is that the patient has been under the care of several different teams, and that the notes consist of a series of ‘contacts’ (ward rounds, referrals, escalations), few of which relate to each other – lots of goldfish medicine, even over the course of a single admission. I have ceased to be surprised by the fact that I, at the point of escalation to critical care, am the first person actually to review the entire narrative of the patient’s stay in hospital. Once that narrative is put together, very often the trajectory of the patient’s illness becomes self-evident – and the question of whether they would benefit from a period of brutal, invasive, intensive medicine usually answers itself.

Patient Stories

The defence against goldfish medicine in the ‘old days’ was physician continuity – back then you could  expect to be treated most of your life by the same GP, or when you came into hospital by one consultant and his ‘firm’ (the small team of doctors that worked just for him – for in the good old days it was almost invariably a him) for the whole admission.  They would carry your story – every now and then summarising it in a clerking or a well crafted letter.  But physician continuity has gone – and it isn’t likely ever to come back.

The EPR promised to solve the continuity problem by ensuring that even if you had never met the patient in front of you before (nor were likely ever to meet them again), you at least had instant access to everything that had ever happened to them – including the results of every test they had ever had. But it doesn’t work: data has no meaning until it is turned into a story – and the more data you have, the harder it is, and the longer it takes, to turn it into one.

And stories matter in medicine. They matter to patients and their relatives, who use them to understand the random injustice of disease; a story tells them where they have come from and where they are going. They matter to doctors as well – medical narratives are complex things, played out in individual patients over different timescales, from a life span to just a few minutes, each narrative having implications for the others. Whilst we don’t necessarily think of it as such, it is precisely the complex interplay between chronic and acute disease, social and psychological context, genetics and pathology that we narrate when summarising a case history. When it is done well it can be a joy to read – and of course it creates the opportunity for that sudden moment when you get the diagnostic insight that changes the course of a patient’s treatment.

Natural Language Generation

Turning the undifferentiated information that is a patient’s medical record – whether paper or digital – into a meaningful story has always been a doctor’s task. What has changed is the amount of information available as source material, and the way it is presented. A good story always benefits from good editing – leaving out the superfluous, immaterial or irrelevant detail is an expert task, and one that requires experience and intelligence. You see it when comparing the admission record taken by a foundation year doctor with that of an experienced registrar or consultant – the former will be a verbatim record of an exchange between doctor and patient, the latter a concise inquisition that homes in on the diagnosis through a series of precise, intelligent questions.

So is the AI technology that can spontaneously generate a love story sufficiently mature to be turned to the task of intelligently summarising the electronic patient record into a meaningful narrative? It’s certainly been used to that effect in a number of other information tasks – weather forecasts and financial reports drafted using NLG technology are now routinely published. The answer, of course, is maybe – there have been some brave attempts – but I don’t think we are there yet. What I do know is that AI technology is progressing apace, and it won’t be very long before NLG applied to a comprehensive EPR is doing a better job than your average foundation year doctor at telling the patient’s story – maybe then we will fall back in love with the EPR? Maybe…