Artificial Intelligence

Does your NHS trust need AI?

21st June 2022
Kiera Sowery

Kiera Sowery caught up with Dr. Neelan Das, Consultant Interventional Radiologist, Kent & Canterbury Hospital, Canterbury, to discuss AI and whether it should be utilised by NHS trusts.

How can AI help solve healthcare delivery challenges?

Ten years ago, there was a lot of hype around the use of artificial intelligence (AI) in healthcare settings. If you follow the Gartner hype cycle, the use of AI is progressing from the slope of enlightenment to the plateau of productivity.

Today, AI is being used in limited capacities to help with workflow prioritisation, but there is a clear use case for it: it frees up time for clinicians, making AI an aid to efficiency and decision-making. One scenario in which this works well is in providing an initial diagnosis, which is then verified by a clinician. In areas such as imaging diagnostics, algorithms can be trained and targeted to identify specific abnormalities more easily than in other areas of practice.

It is also making great strides in research, as there is a role for machine learning/deep learning in increasing the efficiency of clinical trials. Historically, clinicians would have to manually label data; this can now be sped up and made more accurate through automated labelling.

Longer-term, there are a lot of areas AI can move into, but much of this will depend on having more comprehensive electronic patient records across the NHS. With these in place we’ll be able to do more analysis of anonymised patient data and implement applications of AI.  

What is the application of AI in medical imaging, operational task automation and patient monitoring/treatment planning?

Each of these tasks is significant, but there are some principles which are similar across all of them. Fundamentally, whatever the algorithm is designed to do, it will be an aid to the clinician to speed up processes. For example, historically when reviewing a cardiac MRI scan the clinician would draw circles on every slice in the heart manually, whereas with machine learning the software can learn to automate this process with training from the clinician. Previous software, such as CAD/CAM, would have to be corrected by the operator, which could end up being as time-consuming as the manual process.
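As a rough illustration of what that automation replaces, the Python sketch below derives a chamber volume once the software, rather than the clinician, has outlined each slice. The masks, pixel size and slice thickness here are synthetic stand-ins; a real pipeline would take the masks from a trained segmentation network.

```python
import numpy as np

# Illustrative acquisition parameters (invented for this sketch).
pixel_area_mm2 = 1.4 * 1.4      # in-plane pixel area
slice_thickness_mm = 8.0        # spacing between short-axis slices

# Synthetic stand-ins for the per-slice masks a trained model would
# produce; previously a clinician drew these contours by hand.
masks = [np.zeros((128, 128), dtype=bool) for _ in range(10)]
for m in masks:
    m[50:75, 50:75] = True      # stand-in for a segmented ventricle

# Volume = sum over slices of (mask area x slice thickness).
areas_mm2 = [m.sum() * pixel_area_mm2 for m in masks]
volume_ml = sum(areas_mm2) * slice_thickness_mm / 1000.0
print(f"Estimated ventricular volume: {volume_ml:.0f} ml")
```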

We have been working with Qure.ai, using its deep learning and AI-based solutions for early disease detection metrics in radiology. Based on this experience and other work we are doing with AI, there is a case for chest X-ray AI to help with workflow prioritisation. The software can automate the process of telling the clinician whether a finding is normal or abnormal and, over time, as you build up confidence in the software, this can help clinicians to prioritise which X-Rays to review first. In time, I could see a scenario where, if you are confident about which X-Rays are normal and show no signs of abnormality, you would not have to review every single image; rather, you could audit a percentage of them. This would ease the burden on radiology teams, as applying the same level of analysis to X-Rays with and without abnormalities is time-consuming when you're looking to improve patient outcomes.
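As a minimal sketch of what such prioritisation could look like, the Python below sorts a reporting worklist by a hypothetical per-study abnormality score. The Study record and the scores are invented stand-ins for a real AI output, not Qure.ai's actual interface.

```python
from dataclasses import dataclass

@dataclass
class Study:
    accession_id: str
    abnormality_score: float  # 0.0 = confidently normal, 1.0 = confidently abnormal

def prioritise(worklist: list[Study]) -> list[Study]:
    """Order the reporting queue so likely-abnormal X-rays are reviewed first."""
    return sorted(worklist, key=lambda s: s.abnormality_score, reverse=True)

worklist = [
    Study("CXR-001", 0.03),
    Study("CXR-002", 0.91),
    Study("CXR-003", 0.42),
]
for study in prioritise(worklist):
    print(study.accession_id, study.abnormality_score)
```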

There is also potential for Trusts that may not have a radiologist on hand to review X-Rays immediately to use AI to automate image analysis, so that clinical staff can prioritise those images with apparent abnormalities. This is possible if the AI has been well trained, with a low false negative rate, as this will give clinicians confidence in their decision-making.

For instance, consider the study we did with Qure.ai's solution, qXR. By stratifying chest radiographs as normal or abnormal, Qure's solution may reduce reporting backlogs in a hospital setting and could result in earlier intervention for patients.

Our service evaluation demonstrated that qXR can interpret normal, abnormal and to-be-reviewed chest X-rays with high accuracy across a variety of clinical settings, with the potential to improve reporting. qXR had high sensitivity and high negative predictive value (NPV) on scans originating from A&E, GP referrals, and inpatient and outpatient departments. There was substantial agreement between qXR and both of our radiologists on the study datasets, and higher agreement between qXR and each radiologist than between the radiologists themselves, giving some evidence that qXR can perform as well as an independent radiologist.
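For readers unfamiliar with these metrics, the short Python sketch below shows how sensitivity, NPV and chance-corrected inter-reader agreement (Cohen's kappa) are calculated. The counts and labels are invented for illustration and are not the study's data.

```python
def sensitivity(tp: int, fn: int) -> float:
    """Proportion of truly abnormal scans that the model flags as abnormal."""
    return tp / (tp + fn)

def npv(tn: int, fn: int) -> float:
    """Proportion of model-'normal' calls that are truly normal."""
    return tn / (tn + fn)

def cohens_kappa(a: list[int], b: list[int]) -> float:
    """Chance-corrected agreement between two readers (0/1 labels)."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    p_a, p_b = sum(a) / n, sum(b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

print(sensitivity(tp=180, fn=8))                       # ~0.96 with these made-up counts
print(npv(tn=760, fn=8))                               # ~0.99
print(cohens_kappa([1, 0, 1, 1, 0], [1, 0, 1, 0, 0]))  # ~0.62
```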

A further opportunity relates to the millions of lung nodule scans completed every year. Not every image requires immediate further investigation, but at times clinicians identify nodules that may be of concern and require the patient to be placed on a surveillance schedule. The patient must be scanned on a regular basis to see if the nodule has changed, and the clinician measures it manually, using only one dimension, to detect any change. With AI it is possible to automate the measurement of nodules and use a variety of dimensions, allowing for more accurate interpretation. Consequently, the radiologist/clinician has more time for decision-making.
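To see why measuring in several dimensions matters, consider the toy calculation below: the longest diameter barely moves between scans, while the estimated volume grows substantially. The diameters are invented, and the ellipsoid formula is just one common approximation, not the method the interview describes.

```python
import math

def ellipsoid_volume(d1: float, d2: float, d3: float) -> float:
    """Approximate nodule volume (mm^3) from three orthogonal diameters."""
    return (math.pi / 6) * d1 * d2 * d3

baseline = ellipsoid_volume(8.0, 6.0, 5.0)
follow_up = ellipsoid_volume(8.1, 7.5, 6.4)  # longest axis has barely changed

growth = (follow_up - baseline) / baseline * 100
print("Longest diameter change: 8.0 mm -> 8.1 mm")
print(f"Volume change: {growth:.0f}%")       # the volume has grown noticeably
```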

In the future, with the right research into the health economics of adopting AI, it should also be possible to demonstrate not just how the technology helps to reduce errors, but also how it reduces the overall cost of care.

How does someone choose the right solution for a clinical problem?

Broadly speaking, there are several basic questions you should address, but I would emphasise that the reasons for adopting AI will differ depending on the clinical problem.

The first question should be: “Do you really need an AI solution for your clinical problem?”, accompanied by “Do you know what your clinical problem is?”

These questions may sound obvious, but it is important not to be sucked into the hype around AI and believe it is a silver bullet to solve existing challenges. You can qualify this further by assessing what type of problem you think you may have.

For example, it could be a question of logistics, which does not necessarily create a business case for AI. If the concern is that the radiology department is not turning scans around fast enough, the cause might not be the workflow in the radiology department. It could be that consultants elsewhere are not requesting scans in a timely manner, delaying the diagnosis process as a result. There may be a lack of clinical expertise, or the volume of imaging requests may simply be too high. AI will not necessarily solve all these issues.

AI should be seen as part of a whole system change, not the whole solution.

How can AI be adopted in clinical practice?

Based on my personal experience of adopting AI in my area of the country and part of the NHS, I would say that adoption is a rigorous process. Understandably there is still a lot of caution among medical professionals and support staff around AI.

If you want to implement an AI solution you should be realistic about its capabilities. You should be aware of the potential for the AI to have biases and what risks the software might present. You also need to have confidence in the integrity of the company developing the software, especially in its commitment to implementing the software in real world environments like the NHS, which is unlike most other settings.

You also need to separate “marketing speak” from information about actual product functionality to ensure you understand its capabilities and how it will perform. You should be wary of any company that doesn’t give you much detail on its software, as transparency is critically important when it comes to understanding the AI decision-making process.

It is also very important to educate your team, and to understand that it must include people you might not previously have considered likely colleagues for such an implementation. The team could include IT directors, clinical managers, nurses, patients, procurement teams and business development managers.

You must educate all of these stakeholders and convince them of the reasons for the investment. Especially for those with procurement mindsets it will be important to explain why AI is part of the future of medicine, and that it is going to make a difference to patient care.

Ultimately, this can be tough, as you are trying to break down inertia among individuals who have relied on one way of working for many years. This must be viewed as a change management exercise, which is rarely about the software; it's all about people.

How likely is AI to go wrong in this application?

This wouldn’t be how I’d approach the question from a medical perspective, as in radiology we don’t talk about something being right or wrong. With a scientific hat on, you must ask what right or wrong actually means in your context. In your clinical team you must agree what your threshold for right or wrong is, as a chest X-Ray AI algorithm could overcall an abnormality, producing a false positive. That is not so much of a concern right now, as the X-Ray would be checked by a clinician, so an inaccurate analysis could be caught.

To give you a sense of what right and wrong might look like, we did an offline evaluation of 1,000 X-Rays analysed using AI, which came out with a 1% error rate. This is very similar to the scores for human radiologists, so the difference between typical human responses and the AI was minimal in this particular instance. Understanding whether to evaluate the sensitivity, specificity, area under the curve (AUC) or negative predictive value (NPV) of an algorithm is crucial when implementing AI to solve your specific clinical problems.
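As an illustration of how those metrics relate to one another, the sketch below computes sensitivity, specificity, NPV and AUC from a handful of invented model scores and ground-truth labels (1 = abnormal). In practice, these would be computed on a local validation set, not toy numbers.

```python
def confusion(labels, scores, threshold):
    """Confusion-matrix counts at a given operating threshold."""
    tp = sum(l == 1 and s >= threshold for l, s in zip(labels, scores))
    fp = sum(l == 0 and s >= threshold for l, s in zip(labels, scores))
    fn = sum(l == 1 and s < threshold for l, s in zip(labels, scores))
    tn = sum(l == 0 and s < threshold for l, s in zip(labels, scores))
    return tp, fp, fn, tn

def auc(labels, scores):
    """Area under the ROC curve: probability an abnormal scan outranks a normal one."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0, 0, 1]
scores = [0.92, 0.85, 0.40, 0.30, 0.10, 0.55, 0.05, 0.78]

tp, fp, fn, tn = confusion(labels, scores, threshold=0.5)
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("NPV:", tn / (tn + fn))
print("AUC:", auc(labels, scores))
```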

What is critically important is that you must have ways to surveil the system and make sure it is performing according to the benchmarks set at the start of the process. I would suggest the most effective way to do post-market surveillance is to have the user and vendor working collaboratively to ensure the most effective results.
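One minimal sketch of what such surveillance could look like, assuming a 1% benchmark error rate agreed at go-live and an audit stream in which a radiologist double-reads a sample of AI-assessed cases (the class, window size and simulated stream below are all illustrative):

```python
import random
from collections import deque

class SurveillanceMonitor:
    """Tracks a rolling discrepancy rate against a benchmark agreed at deployment."""

    def __init__(self, benchmark_error_rate: float = 0.01, window: int = 1000):
        self.benchmark = benchmark_error_rate
        self.recent = deque(maxlen=window)  # 1 = audit radiologist disagreed with the AI

    def record(self, disagreed: bool) -> None:
        self.recent.append(int(disagreed))

    def error_rate(self) -> float:
        return sum(self.recent) / len(self.recent) if self.recent else 0.0

    def breached(self) -> bool:
        # Only alert once the window is full, to avoid noisy early readings.
        return len(self.recent) == self.recent.maxlen and self.error_rate() > self.benchmark

# Simulated audit stream: whether the double-reading radiologist
# disagreed with the AI on each case.
random.seed(0)
monitor = SurveillanceMonitor()
for disagreed in (random.random() < 0.015 for _ in range(2000)):
    monitor.record(disagreed)
    if monitor.breached():
        print(f"Escalate to vendor: rolling error rate {monitor.error_rate():.3f}")
        break
```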

There are ethical issues to be aware of in this approach, which is why you must be confident in the vendor and its willingness to be transparent with its data. A good indicator is whether the vendor is prepared to publish papers for peer review, and how it reacts to that feedback. That is something I looked for when engaging with Qure.ai: as they already had more than 30 peer-reviewed papers, it added to our confidence.

What does the future of AI look like for the NHS?

I would divide this into three phases:

  • Short-term: Today, we are seeing a few innovative or pioneering Trusts that are willing to take on the workload of implementing an algorithm, such as the work East Kent has undertaken with Qure.ai. These Trusts should then share the knowledge and data with the rest of the NHS. At this stage, the NHS has a crucial role in ensuring the learnings from implementing these innovations are shared, but in time this should lead to a snowball effect.
  • Mid-term: Adoption will begin to snowball when we have an infrastructure in place to enable testing at scale in real world environments. Currently, there is the London Medical Imaging and AI Centre for Value Based Healthcare, a consortium of 16 NHS Trusts funded by a £16m NIHR grant, which is setting up the infrastructure to make it easier to develop algorithms in-house or for vendors to refine their algorithms within the framework of the NHS. This means the AI can be tested with real world data in a secure environment while patient data is protected.
  • Long-term: It all depends on how quickly we reach artificial general intelligence, as that will determine how quickly machines can learn and adapt. If we get this right (and that could be a big if), in the more distant future we could potentially see humans merging with machines, and there might be ambient intelligence-related applications.
