I attended the ‘Digital Health Re-Wired’ conference at Birmingham’s NEC last week. There was a lot of talk about AI – in fact I think the term featured on pretty much every stand and in every stage presentation. People are excited about AI, and wherever you work in healthcare, AI is coming to a clinical information system near you…

At this point I need to declare an interest – I absolutely hate the term Artificial Intelligence. I think it is totally misleading. In fact I’m pretty sure that there is no such thing as artificial intelligence – it is a term used to glamorise what are, without doubt, very sophisticated data processing tools, but also to obscure what those tools are doing and to what data. In medical research, hiding your methods and data sources is tantamount to a crime…

An Intelligent Definition

So what is artificial intelligence? It refers to a class of technologies that pair certain types of algorithm with very large amounts of data. The algorithms used in AI are variously called machine learning algorithms, adaptive algorithms, neural networks, clustering algorithms, decision trees and many variations and sub-types of the same. Fundamentally, however, they are all statistical tools used to analyse and seek out patterns in data – much like the statistical tools we are more familiar with, such as linear and logistic regression. In fact the underpinning mathematics of a learning algorithm such as a neural network was invented in the 18th century by an English Presbyterian Minister, Philosopher and Mathematician – The Reverend Thomas Bayes. Bayes’ Theorem found a way for a statistical model to update itself and adapt its probabilistic outcomes as it is presented with new data. It was the original adaptive algorithm – one that has ultimately evolved into today’s machine learning algorithms, which are given their power by being hosted on very powerful computers and fed very, very large amounts of data.
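
For the record, here is Bayes’ Theorem in modern notation – the updated (posterior) probability of a hypothesis, given new data, is its prior probability reweighted by how likely that data would be if the hypothesis were true:

```latex
P(\text{hypothesis} \mid \text{data}) =
  \frac{P(\text{data} \mid \text{hypothesis}) \times P(\text{hypothesis})}
       {P(\text{data})}
```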

The other ingredient that has given modern machine learning tools their compelling illusion of ‘intelligence’ is the development of a technology called large language models (LLMs). These models are able to present the outputs of the statistical learning tools in natural, flowing, human readable (or listenable) narrative language – i.e. they write and talk like a human – ChatGPT being the most celebrated example. I wrote about them about 5 years ago (The Story of Digital Medicine) – at which point they were an emerging technology; they have since become mainstream and extremely effective and powerful.

Danger Ahead!

Here lies the risk in the hype – and the root cause of some of the anxiety about AI articulated in the press. Just because something talks a good talk and can spin a compelling narrative doesn’t mean it is telling the truth. In fact quite often ChatGPT will produce a well crafted, beautifully constructed narrative that is complete nonsense. We shouldn’t be surprised by this really – because the source of ChatGPT’s ‘knowledge’ is ‘The Internet’ – and we have all learned that just because it’s on the internet doesn’t mean it’s true. Most of us have learnt to be somewhat sceptical and a bit choosy over what we believe when we do a Google search – we’ve learnt to sift out the ads, not necessarily pick the first thing that Google gives us, and to examine the sources and their credentials. Fortunately Google gives us quite a lot of the contextual information around the outputs of its searches, which enables us to be choosy. ChatGPT, on the other hand, hides its sources behind a slick and compelling human understandable narrative – a bit like a politician.

The Power of Data

In 2011 Peter Sondergaard – senior vice president at Gartner, a global technology research and consulting company – declared “data eats algorithms for breakfast”. This was in response to the observation that a disproportionate amount of research effort and spending was being directed at refining complex machine learning algorithms, yielding only marginal gains compared to the leaps in performance achieved by feeding the same algorithms more and better quality data. See ‘The Unreasonable Effectiveness of Data’.

I have experienced the data effect myself – back in 1998/99 I was a research fellow in the Birmingham School of Anaesthesia and also the proud owner of an Apple PowerBook laptop with what was then a novelty – a connection to the burgeoning internet. I came across a piece of software that allowed me to build a simple 4-layer neural network, and I decided to experiment with it to see if it was capable of predicting outcomes from coronary bypass surgery using only data available pre-operatively. I had access to a dataset of 800 patients, of which the majority had had uncomplicated surgery and a ‘good’ outcome, and a couple of dozen had had a ‘bad’ outcome – experiencing disabling complications (such as stroke or renal failure) or dying. I randomly split the dataset into a ‘training set’ of 700 patients and a ‘testing set’ of 100. Using the training set I ‘trained’ the neural network – giving it all the pre-op data I had on the patients and then telling it whether each patient had a good or a bad outcome. I then tested what the neural network had ‘learned’ on the remaining 100 patients. The results were OK – I was quite pleased but not stunned; the predictive algorithm had an area under the ROC curve of about 0.7 – better than a coin toss, but only just. I never published, partly because the software I used was unlicensed, free and unattributable, but mainly because at the same time a research group from MIT in Boston published a paper doing more or less exactly what I had done but with a dataset of 40,000 patients – their ROC area was something like 0.84, almost useful, and a result I couldn’t come close to competing with.
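
For the curious, here is roughly what that experiment looks like with today’s freely available tools – a small neural network trained on pre-operative variables to predict a rare binary outcome, judged by the area under the ROC curve on a held-out test set. The data below are synthetic and the variable set purely illustrative; this is a sketch of the method, not the software or dataset I actually used.

```python
# Minimal sketch: train a small neural network to predict a binary surgical
# outcome from pre-operative data, then measure area under the ROC curve.
# Synthetic data and illustrative feature counts only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 800                                    # cohort size, as in the anecdote
X = rng.normal(size=(n, 10))               # 10 hypothetical pre-op variables
# Rare 'bad' outcome (~3%), weakly related to the features
p = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 3.5)))
y = rng.binomial(1, p)

# Hold out 100 patients for testing, keeping the outcome mix similar
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=100, stratify=y, random_state=0)

scaler = StandardScaler().fit(X_train)
model = MLPClassifier(hidden_layer_sizes=(8, 4), max_iter=2000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

probs = model.predict_proba(scaler.transform(X_test))[:, 1]
print("ROC AUC:", roc_auc_score(y_test, probs))
```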

Using AI Intelligently

So what does this tell us? As a practicing clinician, if you haven’t been already, you are very likely in the near future to be approached by a tech company selling an ‘AI’ solution for your area of practice. There are some probing questions you should be asking before adopting such a solution, and they are remarkably similar to the questions you would ask of any research output or drug company recommending you change practice:

  1. What is the purpose of the tool?
    • Predicting an outcome
    • Classifying a condition
    • Recommending actions
  2. What type of algorithm is being used to process the data?
    • Supervised / Unsupervised
    • Classification / Logistic regression
    • Decision Tree / Random Forest
    • Clustering
  3. Is the model fixed or dynamic? i.e. has it been trained and calibrated using training and testing datasets and is now fixed or will it continue to learn with the data that you provide to it?
  4. What were the learning criteria used in training? i.e. against what standard was it trained?
  5. What was the training methodology? Value based, policy based or model based? What was the reward / reinforcement method?
  6. What was the nature of the data it was trained with? Was it an organised, labelled dataset or a disorganised, unlabelled one?
  7. How was the training dataset generated? How clean is the data? Is it representative? How have structural biases been accounted for (Age, Gender, Ethnicity, Disability, Neurodiversity)?
  8. How has the model been tested? On what population, and in how many settings? How have they avoided cross-contamination of the testing and training datasets? (One way to check this is sketched after this list.)
  9. How good was the model in real world testing? How sensitive? How specific?
  10. How have they detected and managed anomalous outcomes – false positives / false negatives?
  11. How do you report anomalous outcomes once the tool is in use?
  12. What will the tool do with data that you put into it? Where is it stored? Where is it processed? Who has access to it once it is submitted to the tool? Who is the data controller? Are they GDPR and Caldicott compliant?
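
On question 8 in particular, one concrete thing to check is whether records from the same patient could have leaked into both the training and the testing sets – a common way for reported performance to be flattered. Below is a minimal sketch of a leakage-resistant split using scikit-learn’s group-aware splitter; the column names are hypothetical and the data invented purely for illustration.

```python
# Minimal sketch: split records into training and testing sets such that all
# records from a given patient fall on one side only, avoiding
# cross-contamination between the two sets. Column names are hypothetical.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

records = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3, 3, 4, 5],
    "feature":    [0.2, 0.3, 0.9, 0.1, 0.4, 0.5, 0.8, 0.6],
    "outcome":    [0, 0, 1, 0, 0, 1, 1, 0],
})

splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(records, groups=records["patient_id"]))

train, test = records.iloc[train_idx], records.iloc[test_idx]
# No patient should appear in both sets
assert set(train["patient_id"]).isdisjoint(set(test["patient_id"]))
```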

Getting the answers to these questions is an essential prerequisite to deploying these tools into clinical practice. If you are told that the answers cannot be divulged for reasons of commercial sensitivity – or the person selling it to you just doesn’t know the answers – then politely decline and walk away. The danger we face is being seduced into adopting tools which are ‘black box’ decision making systems – it is incumbent on us to understand why they make the decisions they do, how much we should trust them, and how we can contribute to making them better and safer tools for our patients.

An Intelligent Future

To be clear, I am very excited about what this technology will offer us as a profession and offer our patients. It promises to democratise medical knowledge and put the power of that knowledge into the hands of our patients, empowering them to self-care and advocate for themselves within the machinery of modern healthcare. It will profoundly change the role we play in the delivery of medical care – undermining the current medical model, which relies on the knowledge hierarchy between technocrat doctor and submissive patient, and turning that relationship into the partnership it should be. For that to happen we must grasp these tools – understand them and use them intelligently – because if we don’t they will consume us and render us obsolete.

I have read two stories this week.

The first was written in an interesting, contemporary literary style – you know the sort – short sparse sentences almost factual, leaving lots of ‘space’ for your own imaginative inference, not making explicit links between facts and events but leaving you to do that for yourself.  It was a love story, rather charming and quite short, describing a familiar narrative of boy meets girl, invites her to the cinema and they fall in love (probably).  It could be described as Chandleresque in style – though it isn’t that good – in fact it could have been written by an 11+ student.  It wasn’t though – it was in fact written by a computer using a form of artificial intelligence called natural language generation with genuinely no human input.  You can read how it was done here.

The second story I read is a description of a falling out of love – of the medical profession with the IT industry and the electronic patient record.  This one is very well written by Robert Wachter and is a warts-and-all recounting of the somewhat faltering start of the digital revolution in healthcare.  It is called ‘The Digital Doctor’ and I would highly recommend you read it if you have any interest in the future of medicine.  It is not the manifesto of a starry-eyed digital optimist, nor is it the rantings of a frustrated digital sceptic – he manages to artfully balance both world views with a studied and comprehensive analysis of the state of modern health IT systems.  His realism extends to understanding and articulating the trajectory of the health IT narrative and where it is taking us – which is a radically different way of delivering medical care.  I won’t use this blog to précis his book – it’s probably better if you go and read it yourself.

From Data to Information to Understanding

The falling out that Dr Wachter describes really is quite dramatic – this is the United States, the most advanced healthcare system in the world – yet there are hospitals in the US that advertise their lack of an EPR as a selling point to attract high quality doctors to work for them.  Where has it gone wrong?  Why is the instant availability not only of comprehensive and detailed information about our patients, but also of a myriad of decision support systems designed to make our jobs easier and safer, not setting us alight with enthusiasm?  In fact it is overwhelming us and oppressing us – turning history taking into a data collection chore and treatment decisions into a series of nag screens.

The problem is that there is just too much information.  The healthcare industry is a prolific producer of information – an average patient over the age of 65 with one or more long term conditions will see their GP (or one of her partners) 3–4 times a year, have a similar number of outpatient visits with at least 2 different specialists and attend A&E at least once.  That doesn’t include the lab tests, x-rays, visits to the pharmacy, and nursing and therapy episodes.  Each contact with the system generates notes, letters, results, reports, images, charts and forms – it all goes into the record – which, if it is a well organised integrated electronic record, will be available in its entirety at the point of care.

Point of care being the point – most healthcare episodes are conducted over a very short time span.  A patient visiting his GP will, if he’s lucky, get 10 minutes with her – and it doesn’t make for a very satisfactory consultation if 4 or 5 of those minutes are spent with the doctor staring at a screen, navigating through pages of data and attempting to stitch together a meaningful interpretation of the myriad past and recent events in the patient’s medical history.

How it used to be (in the good old days)

So what is it that the above mentioned hospitals in the US are harking back to in order to attract their doctors?  What is the appeal of how it used to be done, when a consultation consisted of a doctor, a patient and a few scrappy bits of paper in a cardboard folder?  Well, for a start, at least the patient got the full 10 minutes of the doctor’s attention.  But what information was the doctor relying on?  What the patient tells them, what the last doctor to see them chose to write in the notes, and whatever other events might have made it into their particular version of this patient’s health record.  This gives rise to what I call a ‘goldfish’ consultation (limited view of the whole picture, very short memory, starting from scratch each time).  We get away with it most of the time – mainly because most consultations concern relatively short term issues – but too often we don’t get away with it, and patients experience a merry-go-round of disconnected episodes of reactive care.


As a practitioner of intensive care medicine, one of the things that occupies quite a lot of my time as ‘consultant on duty for ICU’ is the ward referral.  As gatekeeper of the precious resource that is an intensive care bed, my role is to go and assess a patient’s suitability for ICU care as well as advise on appropriate measures that could be used to avert the need for ICU.  My first port of call is the patient’s notes, where I go through the patient’s entire hospital stay – for some, particularly medical patients, this might be many days or even weeks of inpatient care.  What I invariably find is that the patient has been under the care of several different teams, and the notes consist of a series of ‘contacts’ (ward rounds, referrals, escalations), few of which relate to each other (lots of goldfish medicine, even over the course of a single admission).  I have ceased to be surprised by the fact that I, at the point of escalation to critical care, am the first person to actually review the entire narrative of the patient’s stay in hospital.  Once that narrative is put together, the trajectory of a patient’s illness very often becomes self evident – and the question of whether they would benefit from a period of brutal, invasive, intensive medicine usually answers itself.

Patient Stories

The defence against goldfish medicine in the ‘old days’ was physician continuity – back then you could expect to be treated most of your life by the same GP, or, when you came into hospital, by one consultant and his ‘firm’ (the small team of doctors that worked just for him – for in the good old days it was almost invariably a him) for the whole admission.  They would carry your story – every now and then summarising it in a clerking or a well crafted letter.  But physician continuity has gone – and it isn’t likely ever to come back.

The EPR promised to solve the continuity problem by ensuring that even if you had never met the patient in front of you before (nor were likely ever to meet them again) you at least had instant access to everything that had ever happened to them – including the results of every test they had ever had.  But it doesn’t work – data has no meaning until it is turned into a story – and the more data you have, the harder it is and the longer it takes to turn it into a story.

And stories matter in medicine.  They matter to patients and their relatives, who use them to understand the random injustice of disease – they tell them where they have come from and where they are going.  They matter to doctors as well – medical narratives are complex things, played out in individual patients over different timescales, from a life span to just a few minutes, each narrative having implications for the other.  Whilst we don’t necessarily think of it as such, it is precisely the complex interplay between chronic and acute disease, social and psychological context, genetics and pathology that we narrate when summarising a case history.  When it is done well it can be a joy to read – and of course it creates the opportunity for that sudden moment when you get the diagnostic insight that changes the course of a patient’s treatment.

Natural Language Generation

Turning the undifferentiated information that is a patient’s medical record – whether paper or digital – into a meaningful story has always been a doctor’s task.  What has changed is the amount of information available as source material, and the way it is presented.  A good story always benefits from good editing – leaving out the superfluous, the immaterial or the irrelevant detail is an expert task, and one that requires experience and intelligence.  You see it when comparing the admission record taken by a foundation year doctor with that of an experienced registrar or consultant – the former will be a verbatim record of an exchange between doctor and patient, the latter a concise inquisition that homes in on the diagnosis through a series of precise, intelligent questions.

So is the AI technology that is able to spontaneously generate a love story sufficiently mature to be turned to the task of intelligently summarising the electronic patient record into a meaningful narrative? It’s certainly been used to that effect in a number of other information tasks – weather forecasts and financial reports drafted using NLG technology are now routinely published.  The answer of course is maybe – there have been some brave attempts – but I don’t think we are there yet.  What I do know is that AI technology is moving apace, and it won’t be very long before NLG applied to a comprehensive EPR is doing a better job than your average foundation year doctor at telling the patient’s story – maybe then we will fall back in love with the EPR? Maybe…

Over the next 2 years many NHS hospitals will be replacing their electronic health record (EHR) systems as the contracts born out of the National Programme for IT (NPfIT) come to an end. They are doing so amid a noisy revolution in healthcare informatics – one which demands that we completely reframe not just our ideas about the EPR itself but also the nature of healthcare delivery and the traditional medical model.

I have previously talked about the cumbersome, desk-bound IT systems that have been as much an impediment to the process of care as a source of misery for their users. Why is there such a mismatch between the expectations of healthcare staff and the developers of the tools supposedly there to help them? I don’t believe we can blame the developers of the systems – or, for that matter, the users of them. I believe it has come about because we have all failed to understand the true nature of modern medicine, and it has taken the social media revolution to wake us up to the fact.

Transactional Healthcare

All our current systems – IT, contracting, measuring, counting and operational delivery – are designed around the notion that healthcare is a series of individual transactions, each transaction taking place under the auspices of an individual clinician (e.g. the finished consultant episode – FCE). The overriding assumption of this model is that healthcare is episodic, that between episodes we have no contact and no need for contact with the healthcare system, and that each episode is presided over by a clinician who has, if not control, then at least knowledge of what is happening. It also assumes that only a proportion of the information required to manage an episode of care is of relevance to future episodes – and that episodes of care have largely self contained information needs.

Notwithstanding this, what a typical patient might consider a single episode of healthcare actually consists of a large number of individual transactions, often with several different organisations and several different parts of the same organisation (GP, outpatient department, radiology, pathology etc.). The ‘system’ has fragmented itself – for its own purposes – and has at the same time failed to provide a reliable mechanism for sufficient sharing of information between its different parts to give any sense of continuity – let alone an impression of competence – to the hapless patient.


The NHS, far from being a single organisation, is an aggregation of large and small institutions, thrown together in a pseudo-market, notionally in competition with each other – with strict rules that prevent them from collaborating (competition law) and sharing information (data protection). It is designed to fail to meet the continuity challenge and the expectations of patients. It is also unsafe – with manifold opportunities for failures in the communication of vital information, a common root cause of critical incidents and serious complaints.

This transactional model has developed out of the medical model that assumes patients have singular, diagnosable diseases that are amenable to technological intervention (pharmaceutical or surgical) and that life-long cure is the outcome. This is certainly the case in a proportion of healthcare interactions – but medicine and patients are changing. The vast majority (in excess of 70%) of healthcare delivery is for the multiply co-morbid patient with conditions that they will live with for the rest of their lives. There may indeed be episodic deteriorations in their condition that result in (expensive) episodes of highly technological healthcare – these, however, are often both predictable and avoidable manifestations of a long term condition for which there is a continuous healthcare need.

Social Healthcare

A new medical model – one that fits the needs of 21st century patients – is emerging in the era of social networking: Social Healthcare. This model assumes that the patient has a continuous need for health intervention and that this is delivered by a network of providers that the patient invites into their ‘health space’. A long term condition demands a multitude of interactions over a sustained period of time with many providers. Our current system fails to ensure that those providers interact with each other over the same patient using the same information, because it cannot guarantee that they ever have access to the same information (or even know that they are looking after the same patient). In the social healthcare model the patient is the guardian of all information about themselves. Their information sits in ‘the cloud’, but they hold the key to its access. Over time they build a personalised healthcare team who have access to their cloud data and can ‘talk’ to each other about the patient through his or her ‘health space’. The patient can source information about their condition, ask advice or consult through the portal to their health space at any time of day or night. The portal allows for direct consultation through a Skype-like interface. Even more powerfully, the patient can consult with other patients who have the same conditions. A self supporting community emerges that can provide advice, support and shared experience – on a continuous basis. “It’s like having a waiting room conversation with 1000 other people – just like me”.
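
Purely to illustrate the idea (this is not a description of any existing product), a patient-controlled ‘health space’ might be modelled something like the sketch below – the patient holds the access list, and providers can only read or contribute while they remain invited. All names and methods here are hypothetical.

```python
# Illustrative sketch of the patient-as-data-controller idea: the patient's
# 'health space' holds their records, and only providers the patient has
# explicitly invited can contribute to them. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class HealthSpace:
    patient: str
    records: list = field(default_factory=list)
    invited: set = field(default_factory=set)   # providers granted access

    def invite(self, provider: str) -> None:
        self.invited.add(provider)

    def revoke(self, provider: str) -> None:
        self.invited.discard(provider)

    def contribute(self, provider: str, note: str) -> None:
        if provider not in self.invited:
            raise PermissionError(f"{provider} has no access to this health space")
        self.records.append(f"{provider}: {note}")

space = HealthSpace(patient="Jo Bloggs")
space.invite("GP practice")
space.contribute("GP practice", "Annual diabetes review completed")
space.revoke("GP practice")   # the patient can withdraw access at any time
```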


This new model has the potential to have a powerfully enabling impact on patients. It subverts the traditional hierarchy in the health transaction and puts all providers on an equal footing below that of the patient – who becomes the master of their own information. Networks of expert patients start to generate new kinds of knowledge about the nature of disease and the impact of interventions – crowd-sourced, evidence-creating medicine. Patients have the power to choose who contributes to their health space – based on the value they add to them as individuals.

Preparing to be part of the Crowd contributing to the Cloud

What does this mean for providers of healthcare that are in the process of renewing their informatics systems? What does it mean for me as an individual doctor? The truth is the system described above doesn’t yet exist – there is nothing yet that will force me as a doctor to change my clinical practice in such a way as to make myself available through the cloud to individual patients or groups of patients on a continuous basis. There is nothing yet that will force the hospital I work for to make all the information it holds about patients available to patients through a cloud portal – or to share that information with other providers of the patient’s choosing. Nothing yet – but it is coming…

Third party providers of patient centred health portals are emerging – from a variety of premises and care models – but they are converging on something that looks similar to the ‘social healthcare’ system. PatientsLikeMe, Patients Know Best, Health Fabric and Skype Health are all examples of emerging technologies that will deliver the vision. And they are selling their wares not to doctors, healthcare institutions or commissioners – they are selling them to patients.


What this means, then, is that the systems we purchase or develop will need to be able to talk to these providers, transfer information and support voice and video links. We will have to think through how the information outcomes of healthcare transactions can be recorded in a way that can meaningfully sit in a patient’s cloud – shareable with other providers and understandable by patients.

What this means for doctors is a shift from practicing intermittent transactional healthcare – to developing a personal online continuous relationship with our patients and being part of and interacting with the crowd of providers that are also caring for them.

Welcome to the future of medicine – are you ready?


In a book chapter I wrote on the subject of information management in critical care, I concluded that one of the most important challenges for this generation of doctors is the transfer of clinical information management from paper to electronic systems. So far we have failed that challenge: the vast majority of clinical information is still being recorded and managed (rather poorly) on paper. Those parts that are managed electronically are, in general, still handled by cumbersome, bespoke systems that serve functions other than the delivery of clinical care far better than they serve the needs of doctors, nurses or even patients. As a result a lot of these systems are at best grudgingly tolerated, often despised and sometimes avoided altogether. The majority of doctors, with the exception of a minority of enthusiasts, have withdrawn from the conversation on the development of information management systems (or even been left out altogether) because it has been seen as a technological challenge rather than a clinical one. This is wrong and has to change, because the way we manage clinical information is a crucial enabler for radical change in healthcare delivery. If doctors fail in this challenge we will find ourselves marginalised and obsolete in an ‘innovatively disrupted’ health economy.

Early Adopters

There is, of course, some history here which partly explains our current situation. Electronic clinical information systems have been in existence for over twenty years. The early years of their development were dominated by the technological challenges. The sheer volume and complexity of information collected in the course of delivering clinical care was a challenge when the cost of electronic storage was high and networking infrastructure was not well developed. Taming the complexity of the information – codifying and structuring it so that it could ‘fit’ in a conventional database – was not only difficult but also met with resistance from professionals, as it constrained practice and the PC / workstation became a barrier between doctor and patient. Despite these challenges there are examples of hospitals and hospital systems that showed the world how it could be done (Burton Hospital being a notable example in the NHS) and also how it could go wrong.

The Lost Decade

If the nineties was the pioneering decade for clinical information systems, then the first decade of this century can only be characterised as the ‘lost decade’ – whilst the Internet flourished and the age of distributed, personalised, world-in-your-pocket computing dawned, hospital IT systems remained desk-bound, cumbersome, inflexible and centralised. The need for information sharing was misinterpreted as a need to provide a single solution for all – a strategy that has cost billions, failed to deliver, and diverted funding and, more importantly, the engagement of the medical profession (it was often doctors with IT skills who were the pioneers of the early adoption period) away from user and patient centred solutions.

A Tablet Ushers in a New Era of Medicine

Technology is no longer the problem – storage is cheap and abundant, networks are reliable and fast, and devices are powerful, intuitive and mobile. Data management has been transformed as well. XML allied to sophisticated search algorithms means less taming of information is required; the structure of the ‘database’ need not trouble the user any longer. Cloud technology means that information can be kept absolutely secure whilst not compromising the freedom of permitted users. The technology really has come of age and has surpassed the specification required to deliver clinical information management that truly serves the needs of patients, doctors and managers. Mobile devices like the iPad can give doctors both the tools for information gathering and the tools to access it when it is needed, without the technology getting in the way of the transaction with the patient.

Paper, Paper Everywhere!

But we are still using paper – tons of it. Medical records are stuffed into cardboard folders bursting with mostly useless pieces of paper. The information is locked away, unstructured and inaccessible – every request for information (and there are lots) is a mountainous struggle, consuming hundreds of man hours to extract it. The functions of the paper medical record as care coordinator, communicator, clinical process manager, monitor and legal witness are all conflated, resulting in an extreme precautionary approach to the retention of information which completely subsumes its probably more important function as an informant – almost as important (and often more informative) as the patient themselves.

It’s the Information Stupid

It’s time for the conversation to move from the technology to the information. We must focus on the type of information we gather, how we gather it, and what we need and when we need it in order to deliver safe, effective care. So much duplication and iteration and re-iteration of clinical information has evolved as a defence against the inaccessibility of information. Most patients I have met are astonished at the number of times they are asked the same questions over and over again, even within the same clinical episode – they see the duplication and fragmentation that we as professionals miss.

The care we give our patients is complicated and messy – partly because our patients are complicated and inflict on us huge variance in presentation, severity, comorbidity and response to treatment. That is the nature of medicine and what makes it so all-consumingly interesting. But we make life exponentially more difficult for ourselves by imposing our own variance in practice and reliability on this already unpredictable background. Doing it differently every time, sometimes even changing our minds halfway through, results in variance on variance – which is the definition of chaos. Chaotic medicine results in unpredictable, usually poor, outcomes and huge waste – and is bad medicine.

There is an answer to the information problem which also solves the chaos problem and results in not just better care but dramatically better care. Healthcare organisations that adopt this solution are not only better than their peers, they are exponentially better. The solution is the key to delivering reliable care: the Clinical Process Model. This will be the subject of my next blog.

It is interesting to reflect – now that the PFI bonanza has come to an end and we all have to hunker down and work out how to pay for it for the next 30 years – on what we have spent all the money on and consider whether what we have thrown up around the land is actually what we need.

This paper by the think tank Reform – ‘The Hospital is Dead, Long Live The Hospital’ – is an eloquent exposition of Clayton Christensen’s ‘Innovator’s Prescription’ within an NHS context. The essential conclusion of both is that hospitals need to move from being ‘a place where sick people go’ to becoming ‘an organisation that keeps people well’. This re-framing of purpose prompts the question – what does a hospital that keeps people well look like? I suspect it is not a large building with lots of beds in it (or clinic rooms for that matter).

Interestingly, the specialty of Intensive Care Medicine underwent a similar re-framing of purpose over ten years ago, as a result of the Comprehensive Critical Care programme introduced in response to a lack of intensive care beds. The outcome of this process was the introduction of critical care outreach teams (or medical emergency response teams) linked to a system of population surveillance (MEWS track and trigger) and an expansion of lower acuity beds (high dependency). Almost no additional intensive care beds were commissioned or provided. The result has been that intensive care units have been able to absorb ten years of demand growth, almost eliminate the need for inter-hospital transfer for capacity reasons, reduce futile care, contain costs and improve outcomes.

How do we replicate this operating model at the scale of the hospital within a health economy (as opposed to an intensive care unit in a hospital)? The essential elements are:
1) Knowing the population you are caring for – a disease registry
2) Knowing how they are – a simple method of measuring disease status (a simple track-and-trigger score is sketched after this list)
3) A response team that averts crisis when a trigger threshold is reached – a specialist community team
4) An escalation pathway that includes rapid access to specialist input – specialty hubs
5) Lower acuity beds for step up or step down care – intermediate care beds
6) Alternative pathways for those for whom acute care is inappropriate – end of life services
7) Acute beds for those that genuinely need it – closely linked to an intensive care unit!
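
To make elements 2 and 3 concrete (as referenced in the list above), here is a minimal sketch of a track-and-trigger calculation in the spirit of MEWS. The bands, weights and threshold shown are simplified illustrations only – not a validated clinical scoring system.

```python
# Illustrative track-and-trigger sketch in the spirit of MEWS: score a set of
# observations and escalate when the total crosses a trigger threshold.
# The bands, weights and threshold below are simplified examples only.

def band_score(value, bands):
    """Return the score for the first band whose upper bound the value does not exceed."""
    for upper, score in bands:
        if value <= upper:
            return score
    return bands[-1][1]

def warning_score(obs):
    score = 0
    score += band_score(obs["respiratory_rate"], [(8, 2), (14, 0), (20, 1), (29, 2), (float("inf"), 3)])
    score += band_score(obs["heart_rate"],       [(40, 2), (50, 1), (100, 0), (110, 1), (129, 2), (float("inf"), 3)])
    score += band_score(obs["systolic_bp"],      [(70, 3), (80, 2), (100, 1), (199, 0), (float("inf"), 2)])
    return score

TRIGGER_THRESHOLD = 4   # illustrative: a score at or above this triggers review

obs = {"respiratory_rate": 26, "heart_rate": 118, "systolic_bp": 92}
total = warning_score(obs)
if total >= TRIGGER_THRESHOLD:
    print(f"Score {total}: trigger outreach / response team review")
else:
    print(f"Score {total}: continue routine observation")
```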

This distributed model of care does still need buildings – but what it needs more is intelligent information and communication systems, used by a workforce that understands the need to keep patients, other than those in genuine need, away from hospital. It also needs an operating system that measures its impact, analyses unexpected pathway deviance and learns from system failure.

Eliminating the huge waste in the system from inappropriate and futile hospital care (both inpatient and outpatient) will not only deliver cost savings, it will improve quality of care and outcomes and create the capacity we need for the growth in demand we know is coming.

The hospital is no longer a building; it is a healthcare delivery system. We should be investing in the infrastructure that makes it possible – and that is not bricks and mortar…