There is general agreement that artificial intelligence (AI) has the potential to transform health care. There is also little doubt, however, that this transformation has barely begun: hospitals and health care providers have been slow to adopt AI solutions at scale.

Beyond automation

There are some small-scale and isolated examples of the use of AI, but they tend to be fairly straightforward. For example, one project in Denmark is using the technology to help improve diagnosis in hospital emergency departments. Algorithms analyse the results of diagnostic tests and use them to calculate the probability of a particular disease, helping doctors to reach a "most likely" decision. Elsewhere, AI is being used to improve screening – for example, analysing cervical smear samples or liver scans to detect potential cancers.
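To make this concrete, here is a minimal sketch of what such a probability-of-disease model might look like. It fits a simple logistic regression on synthetic data; the feature names, values and model choice are illustrative assumptions, not details of the Danish project.

```python
# Minimal sketch: a probability-of-disease model trained on diagnostic test
# results. Features, data and model choice are hypothetical, for illustration
# only - not the Danish project's actual inputs or algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "diagnostic test" results: a blood marker, heart rate and age.
n = 1000
X = np.column_stack([
    rng.normal(5.0, 2.0, n),    # blood marker level
    rng.normal(80.0, 15.0, n),  # heart rate
    rng.integers(18, 90, n),    # age
])
# Synthetic outcome loosely tied to the marker and age, for demonstration only.
logit = 0.8 * (X[:, 0] - 5.0) + 0.03 * (X[:, 2] - 50)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# For a new patient, the model returns a probability rather than a verdict,
# which the doctor can weigh alongside their own clinical judgement.
new_patient = np.array([[7.2, 95.0, 64.0]])
prob = model.predict_proba(new_patient)[0, 1]
print(f"Estimated probability of disease: {prob:.2f}")
```

The key point is that the output is a probability for the doctor to weigh, not a verdict: the "most likely" decision stays with the clinician.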

There are a number of reasons why AI has not really taken off in health care yet. One reason may be the sheer complexity involved. Part of the reason the most successful projects have been fairly simple could be a matter of not running before you can walk. In both the examples above, AI is being used to do what it does best, and what humans find both difficult and boring: working through large amounts of data and detecting patterns. Unlike people, algorithms do not get bored or distracted, meaning that they do not miss the one-in-ten-thousand anomaly, or see things that are not really there.

Beyond black boxes

However, there are still issues. Health care providers and clinicians need to be able to understand why an algorithm has made a particular recommendation or suggestion. If they do not understand it, they will probably not accept it. For example, if doctors disagree with the suggested diagnosis and cannot see how it was reached, they are unlikely to set aside their professional judgement in favour of it. A disagreement may prompt them to ask more questions – or it might simply lead them to stop using the technology. What's more, someone who does not understand or accept the algorithm's suggestion will certainly not be able to explain it to the patient.
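One way to address this is to surface which inputs pushed a prediction up or down, so that the recommendation becomes something a clinician can interrogate rather than a black box. The sketch below assumes a simple logistic regression and hypothetical feature names; real deployments typically rely on dedicated explainability tooling (SHAP, for example), but the idea is the same.

```python
# Minimal sketch: decomposing a logistic regression prediction into per-feature
# contributions so a clinician can see what drove a recommendation.
# Feature names and data are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["blood marker", "heart rate", "age"]

def explain(model, x, names):
    """Print each feature's contribution (coefficient * value) to the log-odds."""
    contributions = model.coef_[0] * x
    order = np.argsort(-np.abs(contributions))   # largest influence first
    print(f"baseline (intercept): {model.intercept_[0]:+.2f} log-odds")
    for i in order:
        print(f"{names[i]:>12}: {contributions[i]:+.2f} log-odds")

# Fit a small model on synthetic data purely so the function can be demonstrated.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # outcome driven mostly by feature 0
model = LogisticRegression().fit(X, y)

explain(model, X[0], feature_names)
```

Even a rough breakdown like this gives the doctor something concrete to question, which is a better starting point for trust than an unexplained score.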

There are also many open questions about who is responsible for each algorithm. Should it be those who use it on a daily basis? Or the health care organization that employs them? Alternatively, should the data scientist who developed it continue to be responsible for its decisions? The likely answer is that each stakeholder has to be responsible for different elements. For example, health care practitioners may be best placed to ensure that the data fed into the model is correct. The organization has overall responsibility for model governance, but may need data scientists and IT teams to set up the required systems and maintain them.

Beyond the usual data sources

Difficult issues, however, are no reason to refuse to use AI to improve patient outcomes. The volume of data that is available to hospitals and health care providers is expanding exponentially. From wearables to sensors in equipment, this data is a huge and growing resource. Hospitals and clinicians need to embrace the opportunity to drive their decisions with data.

Nobody is suggesting that AI and analytics can – or should – replace clinicians. Clinicians themselves will recognize, however, that they need access to all the tools at their disposal to make the right decisions for, with and about their patients.

We all know that time is so often critical in medicine. For example, cancer survival rates are better with early detection and treatment. Stroke patients need to be given the right treatment within half an hour for optimum outcomes. We also know that reactions to medicines are very individual. Choosing the right medicine for a particular patient can make a huge difference to their speed of recovery and, for chronic conditions, quality of life. Side effects vary significantly between individuals.

Beyond inertia

In all these cases, using AI and advanced analytics could improve the decision-making process and lead to better patient outcomes. Algorithms offer the potential for personalized health care. We may not be able to move straight to these advanced uses, and, indeed, probably should not. Health care providers need time to get used to the way that AI algorithms work, and everyone needs to accept the use of data and the move towards fully data-driven decisions. Starting small and simple is probably best – but getting started offers the best route towards long-term improvements in patient outcomes.

About Author

Joost Huiskens

Joost is an Industry Expert for Healthcare at SAS and is based in the Netherlands. He has a Ph.D. in surgical and medical oncology. In his current role, Joost connects data science, IT and clinical practice in order to bring analytics to the bedside. He is considered a thought leader on the future of health care and the introduction of patient-focused technology.
