AI in healthcare is taking over – for the better

Could the use of AI in healthcare be the solution our health service desperately needs?

The NHS continues to face crisis after crisis. Former Health Minister Dan Poulter recently defected from the Conservative Party, citing the current Government’s failings with the health service as his motivation.

A recent British Social Attitudes Survey found satisfaction with the NHS dropped to a record low of just 25% in 2023. However, the fault lies less with the service itself than with the pressures placed upon it.

The health system has been struggling to shorten the elective backlog since the pandemic. The length of waiting lists and staff shortages are the most frequently cited reasons for concern.

The population is living longer and experiencing more complex diseases. It's estimated that hospitals will need to cope with a 40% increase in demand over the next 15 years, according to the IFS. With waiting lists currently nearing a million patients, a lot of change is needed to accommodate the added pressure and the increased volume and variety of patient conditions.

Trust in the NHS continues to fall; however, trust in AI is slowly climbing. It is becoming a buzzword across the workforce, but AI in healthcare could be life-saving.

Large Language Models (LLMs) can understand and ‘write’ text, interpreting vast amounts of data quickly and automating what would otherwise be time-consuming manual tasks.

The healthcare system needs to support the ecosystem of patients, doctors, nurses and administrative staff to be as efficient and joined up as possible. Combining AI agents and optimisation models can drive productivity in the NHS and allow patients to be seen more quickly.

How reliable is AI in healthcare?

AI models do have flaws. Traditional AI models can predict sequences, look for anomalies or categorise items, but their success is measured statistically, by how often they are correct. LLMs, built using machine learning, work by predicting the most likely output, and while they are often right, they are sometimes wrong.

For those who are worried about AI in healthcare, a better approach is to focus on the error rate of an AI model. It’s not enough for AI to simply match the error rate of humans. To build the necessary trust, it needs to be 10x, 100x, or possibly 1000x safer.

In healthcare, not all situations are equal in terms of risk, and some AI uses are well established. For example, a trained doctor or nurse will be familiar with the need to challenge AI-generated analysis of diagnostic images in a way a member of the public wouldn't.

AI can improve communication

So much of healthcare is a communication problem, whether it is patients struggling to get through on the phone or doctors writing notes for their colleagues.

AI in healthcare will rapidly improve this through summarisation, categorisation, transcription, translation and voice interfaces. The technology now rolling out across contact centres and help desks in other industries will help lift the pressure on overstretched booking teams and improve patients' experiences.

Learning from patient lessons

Whilst LLMs are trained on vast volumes of data from the internet to tune their billions of parameters, they have had rather limited ‘context windows’—the size of the input you can enter to get your result out. That is changing rapidly, and they can now assimilate even the thickest electronic patient notes files—critical for efficiency gains in healthcare.

‘Transformer’ models have a stage called the ‘attention mechanism’, which learns how different inputs are related to each other. This might help the model understand that the words ‘big cat’ are closely related to ‘lion’, or, in a model trained on medical data, it can help it understand the interactions of different drugs.
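As a rough illustration (not any specific clinical model), the Python sketch below computes scaled dot-product attention over three invented token embeddings; the vectors and vocabulary are made up purely to show how the mechanism weights related tokens more heavily.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: each query attends to every key,
    and the resulting weights mix the corresponding values."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # similarity between tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over each row
    return weights @ V, weights

# Toy embeddings (invented for illustration): 'big', 'cat', 'lion' as 4-d vectors.
tokens = ["big", "cat", "lion"]
E = np.array([[0.9, 0.1, 0.0, 0.2],   # big
              [0.1, 0.8, 0.3, 0.0],   # cat
              [0.4, 0.7, 0.3, 0.1]])  # lion

output, weights = scaled_dot_product_attention(E, E, E)
for token, row in zip(tokens, weights):
    print(token, np.round(row, 2))    # higher weights = more closely related tokens
```

In a full transformer, the queries, keys and values are learned projections of the token embeddings and many attention 'heads' run in parallel, but the core idea of weighting related inputs more heavily is the same.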

With the greater digitisation of medical records, we have been able to bring in automated rule sets that systems apply to things such as medicines and allergies. These rules work where an item has been coded in the Electronic Health Record, but far more information resides in the free-text documents that make up the bulk of a patient's file. AI models will build on this, analysing the patient's notes and medical history and flagging things that may have been overlooked, as sketched below.
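To make the distinction concrete, here is a minimal, hypothetical sketch of the two approaches: a coded allergy rule alongside a placeholder call to a language model over the free-text notes. The record structure, drug names and the ask_llm helper are invented for illustration and do not correspond to any real EHR system or vendor API.

```python
# Hypothetical, simplified patient record: coded fields plus free-text notes.
record = {
    "coded_allergies": ["penicillin"],
    "coded_medications": ["amoxicillin"],
    "notes": "Patient mentioned a rash after a previous course of antibiotics...",
}

# Rule-based check: only works where the item has been coded in the record.
ALLERGY_RULES = {"penicillin": {"amoxicillin", "flucloxacillin"}}

def coded_allergy_alerts(rec):
    alerts = []
    for allergy in rec["coded_allergies"]:
        clashes = ALLERGY_RULES.get(allergy, set()) & set(rec["coded_medications"])
        for drug in clashes:
            alerts.append(f"Coded alert: {drug} prescribed despite {allergy} allergy")
    return alerts

def free_text_alerts(rec, ask_llm):
    # ask_llm is a placeholder for a call to a language model; the prompt is illustrative.
    prompt = (
        "Review these clinical notes and list anything that might contradict "
        f"the current medication list {rec['coded_medications']}:\n{rec['notes']}"
    )
    return ask_llm(prompt)

print(coded_allergy_alerts(record))
```

The rule catches only what has been coded; the second function shows where a language model could, in principle, surface issues buried in the unstructured notes for a clinician to review.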

Give your assistant a goal

LLMs today excel at knowledge-based tasks. They can understand your intent and context and generate good responses. The focus now is on how to make them better at reasoning tasks. These typically involve the model creating a series of subtasks towards its goal, called a chain of thought. As it acts on each subtask, it may then update its chain of thought based on observations.

This method becomes powerful when the model is given skills and access to APIs. In the future, these assistants could coordinate and arrange new patient referral bookings while keeping the patient informed and managing results coming back from diagnostics.
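One common pattern for this is an agent loop: the model proposes the next subtask, calls a tool (an API), observes the result and updates its plan. The sketch below shows the shape of such a loop under assumed, hypothetical helpers; plan_next_step stands in for the LLM, and the booking and notification functions are placeholders rather than real NHS APIs.

```python
# A minimal sketch of an agent loop for handling a referral, assuming
# hypothetical helpers: plan_next_step (the LLM) and a small dictionary of tools.

def book_appointment(patient_id, clinic):
    return {"status": "booked", "clinic": clinic}   # placeholder for a booking API call

def notify_patient(patient_id, message):
    return {"status": "sent"}                       # placeholder for a messaging API call

TOOLS = {"book_appointment": book_appointment, "notify_patient": notify_patient}

def run_agent(goal, plan_next_step, max_steps=5):
    """Repeatedly ask the model for the next subtask, execute it with a tool,
    and feed the observation back so the chain of thought can be updated."""
    history = [("goal", goal)]
    for _ in range(max_steps):
        step = plan_next_step(history)   # e.g. {"tool": ..., "args": {...}, "done": False}
        if step.get("done"):
            break
        observation = TOOLS[step["tool"]](**step["args"])
        history.append((step["tool"], observation))
    return history
```

In practice, the planning step would be the LLM returning structured actions, and every tool call would go through the Trust's existing booking and messaging systems with appropriate clinical safeguards.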

Regulate first, run second

While the capabilities described above would have seemed a fantasy only a few years ago, AI technology is now becoming both trustworthy and trusted.

While individual clinicians and patients can opt out of AI and LLM systems, the option stands for Trusts to digitise. If the overarching Trust becomes more productive, patients who opt out can still be supported, and staff across the organisation will save time.

While the healthcare system is struggling, Trusts need to do what they can to invest in improvements and help their patients be seen by the right carers at the right time.
