In this episode of OncChats: The Future of Pathology With AI, Toufic Kachaamy, MD, of City of Hope; Madappa Kundranda, MD, PhD, of Banner MD Anderson Cancer Center; and Kun-Hsing Yu, MD, PhD, of Harvard Medical School, discuss how artificial intelligence (AI) analyzing pathology slides and clinical data can predict cancer prognosis and treatment response better than traditional models, with potential for clinical integration and AI tumor boards, though challenges remain.
Toufic Kachaamy, MD: I noticed in some of your research that you study prognosis. How are you using AI in pathology to predict prognosis? I really find that fascinating. Tell me more about that.
Kun-Hsing Yu, MD, PhD: This is also one of the unexpected findings from our paper. We first built a standard diagnostic model out of hematoxylin and eosin [H&E] slides, and we then challenged the capability of such a large foundation model in predicting patient prognosis under standard treatment. In our study design, we first stratify the patients by their cancer types and cancer stages, making sure each patient subcohort receives a more homogeneous type of treatment. Then we challenge our machine to make immediate predictions using the data we collected at the time of diagnosis: just a single H&E slide plus some basic demographics like patient age, patient biological sex, and many other factors. We built a multimodality AI model that incorporates different data modalities, different data types, all together into a clinical prediction.
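To make the multimodality idea concrete, below is a minimal sketch of one common way such fusion is done: a slide-level image embedding concatenated with encoded demographics before a risk prediction head. The model class, dimensions, and inputs are illustrative assumptions, not the authors' published implementation.

```python
# Illustrative sketch only: fusing an H&E slide embedding with basic clinical
# features into a single risk score. Not the authors' implementation.
import torch
import torch.nn as nn

class MultimodalRiskModel(nn.Module):
    def __init__(self, image_dim=512, clinical_dim=4, hidden_dim=128):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(image_dim + clinical_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),   # single risk score per patient
        )

    def forward(self, slide_embedding, clinical_features):
        # Simple late fusion: concatenate modalities, then predict risk.
        fused = torch.cat([slide_embedding, clinical_features], dim=-1)
        return self.fusion(fused)

# Hypothetical inputs: a slide-level embedding plus age, sex, and two other
# encoded clinical variables for a batch of 16 patients.
model = MultimodalRiskModel()
slide_embedding = torch.rand(16, 512)
clinical = torch.rand(16, 4)
risk = model(slide_embedding, clinical)   # higher score = higher predicted risk
```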
We show that our model can reliably identify patients with high mortality risk [vs those] with lower mortality risk, and we use the log-rank test to show the statistically significant difference between these 2 groups. The implication of this particular approach would be that, in the future, when considering the standard treatment, perhaps we can first consult this AI model to make sure our patient will likely respond to the standard set of treatment; if not, we may consider more intensive clinical follow-up or...a more aggressive form of combination therapy.
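For readers unfamiliar with the statistic mentioned here, the short example below shows a log-rank comparison between two AI-defined risk groups using the lifelines library; the risk scores and follow-up data are simulated placeholders, not study data.

```python
# Simulated example of comparing survival between two AI-stratified risk groups.
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# Hypothetical AI risk scores for 200 patients within one cancer type and stage.
risk_score = rng.uniform(0, 1, size=200)
high_risk = risk_score >= np.median(risk_score)

# Simulated follow-up: time to death or censoring (months) and an event flag.
time_months = rng.exponential(scale=np.where(high_risk, 12.0, 30.0))
event_observed = rng.random(200) < 0.7  # True = death observed, False = censored

# Log-rank test between the two predicted risk groups.
result = logrank_test(
    time_months[high_risk], time_months[~high_risk],
    event_observed_A=event_observed[high_risk],
    event_observed_B=event_observed[~high_risk],
)
print(f"log-rank p-value: {result.p_value:.4g}")
```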
Toufic Kachaamy, MD: I want to put this into perspective, especially as it relates to gastrointestinal malignancies. If we [consider] pancreatic cancer, [for example], a minority of patients will survive 5 years with standard treatment. But we expose a lot of patients to treatment, and we don't know how to predict these [cases]. Do you see AI, in the future, telling us, "This 10% is going to respond phenomenally to this treatment?" Is that correct? Because that would be phenomenal.
Kun-Hsing Yu, MD, PhD: Yeah, totally. One of the potential uses of our approach [could be] to identify those exceptional responders, those who would respond to a particular treatment of interest. We can use this to personalize treatment decisions for individual patients.
Madappa Kundranda, MD, PhD: This kind of reminds me [that] every time you look at the bell-shaped curve, you look at 2 things as scientists. We look at the front end and the tail end: the ones who don't respond at all, and the ones who have amazing responses—as you mentioned, the ones who have a survival of 10 to 15 years. So far, we've been very rudimentary in terms of being able to predict what causes that. We have worked this up ad nauseam.
I think the biggest advantage with AI...and please correct me [if I'm wrong], Dr Yu, is the fact that we have these large databases that also have these adaptive learning modules, and as we get more information, as we clean up the data, it basically gets better at predicting. That is intriguing for me, but it also leads to the next question, which is: How does this foundation model compare to the traditional models we have?
Kun-Hsing Yu, MD, PhD: As we know, a foundation model is essentially a large model trained with a diverse and large amount of data. For example, ChatGPT is a foundation model built on a large corpus of text information from Wikipedia and other available resources from the internet. Once this model is built, its capabilities are no longer restricted to answering questions whose answers we could also retrieve from Wikipedia. We can even use ChatGPT to plan our next trip or to experiment with some creative recipes. Our idea here is very similar. In the past, standard pathology AI approaches, including many of those that we developed around 10 years ago, largely focused on a particular use case. For example, for colorectal cancer diagnosis, researchers just trained and optimized a particular model that is specialized for that disease distinction.
Our hypothesis here is that we can use a large and diverse amount of data and allow the machine to explore some of the data similarities and differences all on its own in order to learn the basic pathology knowledge about the general manifestations of different disease types. Building on this hypothesis, we built this foundation model, which we call CHIEF—the Clinical Histopathology Imaging Evaluation Foundation model. In this CHIEF model, essentially, we first use a self-supervised approach to learn the similarities and differences among more than 15 million image patches and an additional 60,000 whole-slide pathology images, and we learn their general pathology manifestation distributions across 19 different cancer types.
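As a rough illustration of what "self-supervised" means in this setting, the sketch below shows a generic contrastive pre-training step on tissue patches, where two augmented views of the same patch are pulled together in embedding space. The backbone, augmentations, and loss are simplified stand-ins; this is not the CHIEF training code.

```python
# Generic self-supervised (SimCLR-style) pre-training sketch on tissue patches.
# Architecture and hyperparameters are placeholders, not the CHIEF recipe.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models, transforms

# Two random augmented "views" of the same patch should map to nearby embeddings.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.5, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
])

encoder = models.resnet18(weights=None)   # placeholder backbone, not CHIEF's encoder
encoder.fc = nn.Identity()                # keep the 512-dim feature vector
projector = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 64))

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive loss: each view's positive is the other view of the same patch."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.t() / temperature
    n = z1.shape[0]
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float("-inf"))  # drop self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# One illustrative step on a fake mini-batch; real pre-training iterates over
# millions of patches extracted from whole-slide images.
patches = torch.rand(8, 3, 256, 256)
view1, view2 = augment(patches), augment(patches)
z1, z2 = projector(encoder(view1)), projector(encoder(view2))
loss = nt_xent_loss(z1, z2)
loss.backward()
```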
After this model is trained, we further tailor this model to differentiate different molecular variations and different survival outcomes specific to each disease entity. We show that this new approach, which leverages the overall data distribution from a large and diverse amount of data, combined with weakly supervised machine learning that is tailored for specific tasks, worked much better than many of our early works in cancer-specific and use case–specific applications. This indicates that these AI approaches are very flexible. They can learn from a vast amount of data and capture some of the subtle details that may evade human eyes. With this capability, we can train this machine learning model to do something that is beyond human capability.
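The weakly supervised fine-tuning step can likewise be sketched generically: patch embeddings from the pre-trained encoder are pooled with learned attention and trained against a single slide-level label (attention-based multiple-instance learning). Again, the dimensions and task head below are illustrative assumptions rather than the published model.

```python
# Generic weakly supervised, slide-level fine-tuning sketch (attention-based
# multiple-instance learning). Illustrative only, not the CHIEF code.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, embed_dim=512, hidden_dim=128, num_classes=2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 1)
        )
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, patch_embeddings):            # [num_patches, embed_dim]
        weights = torch.softmax(self.attention(patch_embeddings), dim=0)
        slide_embedding = (weights * patch_embeddings).sum(dim=0)  # attention pooling
        return self.classifier(slide_embedding), weights

# One illustrative step: a slide with 1,000 patch embeddings and one weak,
# slide-level label (e.g., a molecular alteration present/absent).
model = AttentionMIL()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
patch_embeddings = torch.rand(1000, 512)            # from the frozen patch encoder
slide_label = torch.tensor([1])

logits, attn = model(patch_embeddings)
loss = nn.functional.cross_entropy(logits.unsqueeze(0), slide_label)
loss.backward()
optimizer.step()
```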
Madappa Kundranda, MD, PhD: You're spot-on, Dr Yu. I was in Mexico City last week, and for my plan of what I needed to do and where I needed to visit, I used ChatGPT. It gave me an amazing itinerary, and it was phenomenal. But beyond that, I was very excited to look at your most recent publication and review it. I think this is where the rubber really meets the road in the context of how we are going to be able to get all of this into the community.
We've looked at molecular tumor boards. To a certain degree, we're going to get to AI tumor boards, so to speak, and that is an exciting part of it. When we spoke about molecular tumor boards about 15 years ago, people were just laughing at us and saying, "No, this isn't even going to make sense." Now, everyone knows that even in the smallest community setting, there is some kind of molecular tumor board. But this AI tumor board, so to speak, when do you think we would get to that point?
Kun-Hsing Yu, MD, PhD: Depending on our perspective, there are actually a few FDA-approved pathology AI applications already out there, but those are largely application driven and application limited. For widespread, large-scale adoption, perhaps we still have some roadblocks that we need to overcome. How do we incorporate this new AI model into the current clinical workflow without disrupting our usual pathology and clinical evaluation? And how do we best utilize the information we extract from this AI model, such that we aren't misled by some imperfect predictions from these models? After overcoming these hurdles, I'm sure this AI technique could make a huge impact on cancer diagnosis and treatment selection and suggestions for clinicians.