The importance of ethical AI in the health care industry

Artificial intelligence (AI) is one of the most talked-about topics in the tech industry, along with IoT, cloud and blockchain, to name just a few. Although it is a very promising technology, it also carries very high expectations, quite often beyond what AI can actually deliver today.

As with every new technology, a lot of questions are raised about the benefits of AI, as well as the risks that stem from its misuse, whether intentional or not. This is where ethical AI comes into the picture. From a philosophical point of view, ethics tries to define what is good and what is evil. Ethical AI tries to define a set of guidelines, both for the humans who design, build and use AI systems and for the expected behavior of the algorithms executed by machines.

When we look at the health care industry, we see multiple problems: increasing costs, a shortage of medical staff and growing demand for services. Technology can help us tackle all of those challenges, and AI will play a vital role in transforming the health care industry as we know it. You can watch the Forbes conference Artificial Intelligence and Ethics Mandate to hear field experts share their experience addressing bias, algorithmic aspects, regulations and privacy protection. When I think about ethical AI, three key aspects come to mind: governance, fairness and explainability. Let me elaborate on each.

AI governance

Data is the oil of the 21st century, and it fuels artificial intelligence. However, as we generate and collect more data, privacy concerns arise. The Cambridge Analytica data scandal, for example, proved that given enough information, one can predict the behavior of people and influence individuals’ decisions for political gain, all without the knowledge or consent of the people targeted. Similar concerns arise in the health care space.

There are companies that provide genetic testing and collect databases of people’s digitalized DNA. This is extremely sensitive information that can do harm if it gets into the wrong hands. At the same time, these genetic data repositories may be invaluable to medical research. We need to protect privacy while still enabling scientific progress.

Therefore, when it comes to using medical data, we need to provide appropriate governance, oversight and security measures. We must make sure that medical data is used only within an agreed scope and accessed only by authorized personnel and algorithms. We must also ensure that every use of this data goes through objective validation to confirm that it adds value and has no potential to cause harm.
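To make this more concrete, here is a minimal sketch of what a scope- and role-based access check could look like. All the names here (fetch_patient_record, AGREED_SCOPES, AUTHORIZED_ROLES) are hypothetical and purely illustrative; a real system would enforce these rules in a secured data platform with audit logging behind them.

```python
# Hypothetical governance guard for medical data access.
# All names and values below are illustrative assumptions, not a real API.

AGREED_SCOPES = {"oncology-research"}               # purposes the patient consented to
AUTHORIZED_ROLES = {"clinician", "approved-researcher"}

def fetch_patient_record(record_id: str, requester_role: str, purpose: str) -> dict:
    """Return a record only if the requester and purpose pass governance checks."""
    if requester_role not in AUTHORIZED_ROLES:
        raise PermissionError(f"role '{requester_role}' is not authorized")
    if purpose not in AGREED_SCOPES:
        raise PermissionError(f"purpose '{purpose}' is outside the agreed scope")
    # In a real system this would query a secured store and write an audit log entry.
    return {"record_id": record_id, "data": "..."}

record = fetch_patient_record("patient-001", "clinician", "oncology-research")
```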

AI fairness

The learning part of machine learning can only be as effective as the quality, quantity and representativeness of the data used to train the models. One common challenge, often called AI bias, is the underrepresentation of certain groups of the population in the training data. As a result, the model will often produce worse results for less-represented groups.

Data with a gender imbalance will result in worse model accuracy for the underrepresented gender. Skin-cancer detection models trained mostly on light-skinned patients will perform worse on individuals with darker skin. In health care, poor model performance for a particular group may produce unreliable information, leading to an incorrect diagnosis or suboptimal treatment.
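As an illustration, a simple per-group evaluation can surface exactly this kind of gap. The sketch below uses made-up predictions and group labels; in practice, y_true, y_pred and group would come from a held-out test set.

```python
import numpy as np

# Illustrative sketch: compare model accuracy per demographic group.
# The arrays below are synthetic placeholders, not real clinical data.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B"])  # "B" is underrepresented

for g in np.unique(group):
    mask = group == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"group {g}: n={mask.sum()}, accuracy={acc:.2f}")
# Here the smaller group B scores ~0.33 versus 0.80 for group A,
# the kind of disparity a fairness review should catch.
```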

Feeding AI models biased data introduces a systematic bias that we usually want to overcome. Given the sensitive nature of AI outcomes in health care, it is vitally important that models are trained on diversified data that has been checked for potential underrepresentation of certain population groups.
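One simple, illustrative safeguard is to check group proportions in the training data before fitting a model. The 10% threshold below is a hypothetical policy for the sake of the example, not a clinical or regulatory standard.

```python
from collections import Counter

# Sketch: flag groups that fall below a minimum share of the training data.
# Group labels and the threshold are illustrative assumptions.
groups = ["A"] * 900 + ["B"] * 60 + ["C"] * 40
counts = Counter(groups)
total = sum(counts.values())

MIN_SHARE = 0.10  # hypothetical policy: each group should be >= 10% of the data
for g, n in sorted(counts.items()):
    share = n / total
    flag = "OK" if share >= MIN_SHARE else "UNDERREPRESENTED"
    print(f"group {g}: {share:.1%} {flag}")
```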

AI explainability

Some of the algorithms typically used in AI systems (such as neural networks) are considered “black boxes.” This means that we provide the algorithm with training data and ask it to learn to recognize specific patterns in that data. We can parametrize the algorithm and change the input data by adding new characteristics. However, we have no direct insight into why the algorithm classified a particular observation the way it did.

As a result, we may have a very accurate model that produces unexpected results. In a well-known example, researchers trained an algorithm to distinguish between dogs and wolves. As it turned out, the algorithm tended to base its decision on the background of the picture rather than on the animal’s silhouette or fur color: in the training data, all the wolves had trees or forest in the background.

We can use explainability techniques to understand what black-box models base their decisions on. In health care, a bad classification may lead to health- or even life-threatening situations, so it is very important to understand and verify which aspects of the data the algorithms rely on.
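As a sketch of one such technique, the example below applies permutation importance from scikit-learn to synthetic data: shuffling a feature and measuring the drop in accuracy reveals how much the model relies on it. The same idea would expose a dog-vs-wolf classifier leaning on the background rather than the animal.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Sketch of a model-agnostic explainability check on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))          # three synthetic features
y = (X[:, 0] > 0).astype(int)          # the label depends only on feature 0

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Feature 0 should dominate; the others should score near zero.
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance={imp:.3f}")
```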

Sometimes using less-complex models can offer better explainability. Linear regression or decision trees may deliver enough accuracy while giving good visibility into which variables and factors drive the model. When we use a more complex model, we should rely on tools that support explanations. In both cases, subject matter experts should then verify the explanation, looking for potential errors and questionable choices in the variables and characteristics the model uses.
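For example, a shallow decision tree can be printed as human-readable rules that subject matter experts can review directly. The sketch below uses a public scikit-learn dataset purely for illustration; a real clinical model would of course require far more scrutiny.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

# Sketch: a shallow, inherently interpretable model whose decision
# rules can be printed and reviewed by domain experts.
data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# Prints the if/else rules of the tree with the original feature names.
print(export_text(tree, feature_names=list(data.feature_names)))
```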

AI at Amsterdam UMC

AI algorithms in health care can support diagnosis and assist doctors in treatment. One example is the cooperation between SAS and Amsterdam University Medical Center (UMC) to evaluate computed tomography (CT) scans. Computer vision deep learning algorithms evaluate the scans to increase the speed and accuracy of chemotherapy response assessments. The algorithms measure total tumor volume, compared with the typically two-dimensional measurement performed by radiologists. This helps doctors determine more accurately which treatment strategy to choose.

But, as Dr. Geert Kazemier from Amsterdam UMC notes, it is very important that AI technology be transparent and open if it’s going to revolutionize health care. “If you create algorithms to help doctors make decisions, it should be explainable what that algorithm is actually doing,” he says. “Imagine if an algorithm came up with something bad for the patient and the doctor follows it. What’s the effect of that? To err is not only human.”

Want to learn more about how to mitigate the risks of potential AI misuse? In an upcoming webinar on April 13, we will discuss the risk of human bias, the explainability of predictions and decisions made with machine learning algorithms, and the importance of monitoring the fairness and transparency of AI applications. Register here!

About Author

Piotr Kramek

For the last 10 years, Piotr has been supporting companies in getting valuable insights from data and making data-driven decisions. His experience comes from applying advanced analytics across multiple industries, in areas spanning from supply chain optimization and fraud detection to business intelligence and machine learning in health care. Piotr is a passionate technology enthusiast both professionally, when applying data science to tackle challenging problems, and in his private life, in areas such as home automation and 3D printing.
