A woman arrives at the emergency room with chest pain. She immediately receives a chest X-ray. While the radiologist reviews the image, an AI assistant flags anomalies in the patient’s lungs – anomalies that would be invisible without the technology.
The chest pain turns out to be benign, but sophisticated imaging reveals early-stage lung cancer. The tumor is caught early. The patient survives.
Although this scenario is hypothetical, it will soon be a reality. Diagnostic imaging is one of the most mature applications of AI, with more than 100 AI-enabled radiology products approved by the FDA. Most commonly, these products triage scans or flag suspicious regions for review. That alone is groundbreaking given how labor-intensive imaging is, but computer vision for diagnostics has the potential to fundamentally change cancer outcomes through early detection.
Transforming imaging, however, requires an enormous amount of robust, standardized, longitudinal patient data. Where can such data be found? In clinical trials and routine clinical records. Life science companies will need to collaborate to revolutionize diagnosis and treatment.
Radiomics holds great promise but remains clinically elusive
Recently, on The Health Pulse Podcast with Alex Maiersperger, Dr. Greg Goldmacher discussed how AI can automate advanced image measurements and deepen insight into disease biology to support clinical development. Goldmacher, Associate Vice President for Clinical Research and Head of Clinical Imaging and Pathology at Merck, is an advocate of radiomics and believes it has tremendous potential to advance clinical practice.
Radiomics involves extracting quantitative features of tissues and lesions – such as volume or texture – from medical images. Correlated with patient and outcome data, these features could inform clinical decision making, from diagnosis to ending treatment early when a drug isn’t working. The good news: in theory, the technology is ready to apply.
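To make the idea concrete, here is a minimal sketch of radiomic feature extraction: a shape feature and a few first-order intensity statistics computed from a segmented lesion. The feature choices, function names and synthetic data are illustrative assumptions, not a clinical pipeline.

```python
import numpy as np

def radiomic_features(ct_volume, lesion_mask, voxel_volume_mm3=1.0):
    """Compute a few illustrative radiomic features for one segmented lesion."""
    voxels = ct_volume[lesion_mask]          # intensity values inside the lesion

    # Intensity histogram for a simple texture proxy (entropy)
    counts, _ = np.histogram(voxels, bins=32)
    probs = counts[counts > 0] / counts.sum()

    return {
        "volume_mm3": float(lesion_mask.sum() * voxel_volume_mm3),  # shape feature
        "mean_hu": float(voxels.mean()),                            # first-order stats
        "std_hu": float(voxels.std()),
        "entropy": float(-(probs * np.log2(probs)).sum()),          # texture proxy
    }

# Toy example: a synthetic 3-D scan with a spherical "lesion"
rng = np.random.default_rng(0)
scan = rng.normal(40, 10, size=(64, 64, 64))
zz, yy, xx = np.mgrid[:64, :64, :64]
mask = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2
print(radiomic_features(scan, mask))
```

Real radiomics pipelines compute hundreds of standardized features; the point here is simply that each lesion becomes a row of numbers that can be correlated with outcomes.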
For example, the Amsterdam University Medical Center trained a computer vision model on SAS® Viya® to quickly identify tumor characteristics and share vital information with doctors, accelerating diagnoses and helping determine the best treatment strategies.
Specifically, the model uses CT scans to determine the KRAS mutation status of liver metastases in colorectal cancer patients. This innovative, non-invasive approach is significant because today, the only other way to identify genetic mutations in liver metastases is an invasive biopsy. The research is indexed in the National Library of Medicine.
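At a high level, such a model maps radiomic features to a mutation label. The sketch below, using scikit-learn on synthetic data, shows the general shape of that pipeline; the published model’s actual architecture, features and performance are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
# Synthetic stand-ins: one row of radiomic features per lesion
# (e.g., volume, mean HU, std HU, entropy)
X = rng.normal(size=(120, 4))
# Synthetic labels weakly tied to one feature: 1 = KRAS mutant, 0 = wild type
y = (X[:, 0] + rng.normal(0, 1, 120) > 0).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression())
# Cross-validated AUC is only an internal estimate; external validation
# on an independent cohort is the harder, essential step.
print(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```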
With non-invasive scans for molecular profiling, clinical medicine moves closer to revolutionizing cancer care and drug development. Early decisions profoundly impact patient prognosis and well-being, and clinical trials can run more quickly and efficiently.
Obstacles to overcome
Moving from theory to practice inherently brings challenges. In clinical medicine, the central problem is a lack of standardized data.
Goldmacher points out that clinical trials have used image-based endpoints in oncology for decades. AI computer vision could theoretically analyze vast quantities of longitudinal patient data and apply pattern recognition at scale. But with that data come ethical and technical issues.
For instance, patients participating in trials consented to specific uses of their data, which may not include secondary research. Goldmacher also notes that reanalyzing trial data can surface false signals. And how can reliability and fairness be ensured?
Another hurdle is the need for greater standardization in data and methods. In the liver metastases example, the model faltered at external validation, failing to predict mutations in a broader data set – at least partly because the study’s images were acquired with scanner settings that differed from those used in the external cohort. Researchers are actively working on ways to make AI systems in radiology robust to technical variation of this kind.
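One common mitigation is to preprocess every scan into a shared representation before training. The sketch below – with illustrative spacing and intensity choices, not a published standard – resamples CT volumes to a common voxel spacing and normalizes intensities to a fixed Hounsfield-unit window.

```python
import numpy as np
from scipy.ndimage import zoom

def harmonize(ct_volume, spacing_mm, target_spacing_mm=(1.0, 1.0, 1.0),
              hu_window=(-1000, 400)):
    # Resample so every scan has the same physical voxel size
    factors = [s / t for s, t in zip(spacing_mm, target_spacing_mm)]
    resampled = zoom(ct_volume, factors, order=1)   # trilinear interpolation
    # Clip to a fixed HU window and rescale to [0, 1]
    lo, hi = hu_window
    clipped = np.clip(resampled, lo, hi)
    return (clipped - lo) / (hi - lo)

# Toy scan with anisotropic voxels (thicker slices along the first axis)
scan = np.random.default_rng(1).normal(0, 100, size=(40, 128, 128))
normalized = harmonize(scan, spacing_mm=(2.5, 0.8, 0.8))
print(normalized.shape, normalized.min(), normalized.max())
```

Preprocessing alone does not solve scanner variability, but it removes the most obvious acquisition differences before a model ever sees the data.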
Inconsistent definitions and reference values across data sets make it extremely difficult to develop trustworthy models – and models that can’t be trusted are useless in clinical practice.
Collaboration and data sharing are essential
Federated learning, a form of collaborative machine learning, is a way to develop and validate AI models across diverse data sources while mitigating the risk of compromising data privacy.
Goldmacher discusses data sharing agreements, the use of trusted third parties, and federated learning models, where only insights – not data – are shared. In the early-stage MELLODDY research consortium, ten pharmaceutical companies joined forces on a Kubernetes-based platform to explore what federated learning could offer drug discovery. On the clinical side, the IMI PIONEER project aimed to standardize and integrate data on prostate cancer treatment into a single platform.
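To illustrate the core idea, here is a minimal federated averaging sketch in NumPy: each site trains locally and shares only model weights, never patient data. The three-site setup, the simple logistic-regression model and the synthetic data are all assumptions for illustration, not how MELLODDY or PIONEER were built.

```python
import numpy as np

rng = np.random.default_rng(7)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """A few steps of logistic-regression gradient descent on one site's data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (preds - y) / len(y)
    return w

# Three hospitals, each holding its own (synthetic) features and labels
sites = [(rng.normal(size=(50, 4)), rng.integers(0, 2, 50).astype(float))
         for _ in range(3)]

global_w = np.zeros(4)
for _ in range(10):
    # Each site computes an update on private data; only weights leave the site
    local_ws = [local_update(global_w, X, y) for X, y in sites]
    # A coordinator averages the weights into the next global model
    global_w = np.mean(local_ws, axis=0)

print("global model weights:", global_w)
```

Production frameworks add secure aggregation, weighting by site size and careful auditing, but the privacy principle is the same: insights travel, data stays put.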
Alongside the benefits of sharing, there are practical points to consider:
- Federated learning can be resource-intensive.
- Standardization is necessary.
- Generative AI and synthetic data offer novel approaches.
Now’s the time for life sciences organizations to find synergies, bring clinical development and data science teams together, and move AI projects from idea to execution.