© 2025 MJH Life Sciences™ and OncLive - Clinical Oncology News, Cancer Expert Insights. All rights reserved.
In this episode of OncChats: The Future of Pathology With AI, Toufic Kachaamy, MD, of City of Hope; Madappa Kundranda, MD, PhD, of Banner MD Anderson Cancer Center; and Kun-Hsing Yu, MD, PhD, of Harvard Medical School, discuss how artificial intelligence (AI) can diagnose cancer from standard hematoxylin and eosin (H&E) slides while accounting for slide quality differences to support broader clinical use.
Toufic Kachaamy, MD: Can you tell us, on the practical side, how this looks in everyday life? Is there a specific staining that needs to happen? Is this for any staining? How quickly [does it] happen? [Do] the pathologists have to assign this and review? Just explain to us: How does this work in real life?
Kun-Hsing Yu, MD, PhD: In our current research, we primarily work with standard hematoxylin and eosin staining slides because those are the most common and standard for diagnosing most types of cancers. We've been fortunate to collaborate with many hospitals worldwide to collect large, diverse data sets of standard H&E pathology. We have shown that using AI, and more specifically computer vision approaches involving vision transformers, we can train a model to differentiate and diagnose different types of cancers using standard H&E. You also make a good point that there are a few special stains that clinicians and pathologists use to further identify molecular subtypes, and we have shown that our method extends to those as well, including [immunohistochemistry and] many other special stains specific to individual cancer types.
Madappa Kundranda, MD, PhD: Dr Yu, I think this is fantastic. What you're alluding to [was once] truly a pipe dream. When we look at cancer, 70% [or more of cases are] actually managed in the community; less than 25% to 30% [of cases are] managed in tertiary care centers. I think this is going to be the key context [here]. Going back to Dr Kachaamy's question [about] the workflow: [regarding] the H&E stains, unfortunately, as a grad student, I did my piece of that. Luckily, I don't need to do that anymore. But the practical part of it is also the quality of the specimens in the community, based on the expertise [and] the technology that they have.
I'm a gastrointestinal medical oncologist, so I'm just going to keep it simple. Even when I get a fine-needle aspiration or a fine-needle biopsy of the pancreas from Dr Kachaamy, the quality is very different from another advanced endoscopist in the community. It might be because of expertise, but it could be because of the equipment and the needles that they're using. So, my question is: When we're trying to translate this to the real world, how is the quality of the specimen acquisition for these H&E slides a part of this AI algorithm? Because you're going to have a spectrum. What are your thoughts on that?
Kun-Hsing Yu, MD, PhD: That's a great point. We acknowledge that many of the slides we obtain from different sources have variable quality, and that presents a big hurdle, even for expert human evaluators. Occasionally, our expert pathologist collaborators can immediately tell whether a particular slide is from their hospital or from an outside hospital. Our approach to addressing this with AI is to use image preprocessing modules that first detect slide quality, along with image augmentation approaches that ensure our training data set captures a wide diversity of possible tissue quality, so the model learns to handle it.
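The augmentation idea Dr Yu describes can be illustrated with a minimal sketch. This is not his group's actual pipeline; the function name, the per-channel gain, and the jitter ranges are illustrative assumptions standing in for the kind of stain- and scanner-variability augmentation commonly applied to H&E tiles during training.

```python
import numpy as np

def augment_stain(img, rng):
    """Simulate slide-to-slide stain variability on an H&E image tile.

    img: float array of shape (H, W, 3) with values in [0, 1].
    Applies a random per-channel gain (mimicking stain intensity
    differences between labs) and a random gamma shift (mimicking
    scanner brightness differences), then clips back to [0, 1].
    Ranges here are illustrative, not from the published method.
    """
    gain = rng.uniform(0.8, 1.2, size=3)   # per-channel stain strength
    gamma = rng.uniform(0.8, 1.2)          # overall brightness curve
    out = np.clip(img * gain, 0.0, 1.0) ** gamma
    return np.clip(out, 0.0, 1.0)

# Example: generate several augmented variants of one synthetic tile,
# as would be done on the fly while training a classifier.
rng = np.random.default_rng(0)
tile = rng.random((64, 64, 3))
augmented = [augment_stain(tile, rng) for _ in range(4)]
```

Training on many such randomized variants of each tile pushes the model to rely on tissue morphology rather than on the staining or scanning signature of any one hospital.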
With this approach, we are able to improve the generalizability of our model and further enhance its applicability to the clinical workflow. Both of you have identified a very important current hurdle: How do we translate this important clinical research question into actual clinical practice? In my opinion, there are still a few ongoing challenges that we need to address as a team, involving clinicians, AI developers, researchers, pathologists, and many others, to identify the right workflow for embedding current state-of-the-art AI systems into day-to-day clinical practice.
Madappa Kundranda, MD, PhD: Thank you so much. I think that is such an important part of it. Just going back to rudimentary pathology, there were some next-generation sequencing companies that used microdissection; this [was] almost a decade ago. That made [them] the [go-to] company, because within pancreatic cancer, with microdissection, I was able to get rid of the stroma as opposed to the actual cancer cells. That made a huge difference way back then.
Now, when you are referring to image augmentation and trying to get that purity aspect of it, as it pertains to diagnosis, prognosis, all of that, within the AI models, I think that is what is fascinating. That's what's exciting. That truly is the way of the future, because we cannot have our patients undergo multiple repeated biopsies. We need to look at it on the tail end to figure out how best we can improve what we have in terms of our technology, with the specimens [and] the slides that we have. We all know [that] even H&E slides can vary from institution to institution based on the way they are stained.
Toufic Kachaamy, MD: I completely agree.