AI-Powered Scout Platform Could Enhance Oncology Decision-Making With Data- and Expert-Based Insights

Joshua Feinberg, MD, discusses the role of the Scout AI platform in aiding trial matching and treatment selection in oncology.

Joshua Feinberg, MD

The integration of artificial intelligence (AI)–powered clinical tools is poised to transform oncology practice, and Scout, a customized large-language model (LLM), AI-powered and expert-trained search tool developed by OncLive®, could provide a streamlined method to aid in treatment decision-making and identifying clinical trial opportunities for patients with cancer, according to Joshua Feinberg, MD.

At the 42nd Annual Miami Breast Cancer Conference, Feinberg presented on the current role of AI in oncology practice. He also tested the Scout platform ahead of its launch and detailed his main takeaways in an interview with OncLive.

“Scout could definitely have a use in clinical practice [in] reducing the time it takes to look up things such as treatment guidelines, eligible clinical trials for our patients, and the most up-to-date medications or treatment options for our patients,” Feinberg said.

In the interview, Feinberg, a surgical oncologist with Maimonides Medical Center in New York, New York, highlighted the utility of AI in refining clinical workflows, optimizing treatment selection, and ensuring oncologists remain current with the latest practice-changing data; discussed his test run on Scout; and detailed how this tool could be applied in clinical practice.

OncLive: As cancer treatment decision-making becomes increasingly complex, how could AI tools aid clinicians in parsing through the vast amounts of information to make proper decisions for their patients?

Feinberg: As our treatment armamentarium [continues to] grow and more medications are being released to help our patients, it [becomes] difficult to keep track of all the most up-to-date and current data. There can be drugs approved by the FDA that [clinicians] may not be aware of [simply from] going to conferences.

Having AI [provide] up-to-date information about those medications allows a provider to do a quick search and in a very seamless way, look at all the available treatments for that patient. Also, National Comprehensive Cancer Network [NCCN] Guidelines are constantly evolving at a very fast pace, and it's quite difficult to keep up to date [solely] by reading, attending conferences, and everything we do to [educate ourselves as] academic or community-based [providers]. Having a resource like Scout that was [introduced] at the [Miami Breast Cancer Conference] or any AI platform is helpful to stay up to date on these constantly evolving guidelines.

What was your initial impression of OncLive’s Scout platform? What specific features could be most valuable in your clinical practice?

I thought it was incredible how easy it was to use and how fast the results came back. The output from it was extremely fast, and from my understanding, there was even a decrease in the [time needed] and an increase in efficiency comparing [its use from] the beginning to the end of the conference. In other words, the Scout platform was learning information and could provide more accurate information in a faster fashion, even in the time span of the Miami Breast Cancer Conference itself. That was [the first component] I was impressed with.

Were there any specific prompts you tested while using Scout?

The accuracy of the information—at least the questions I was asking in terms of current treatments—[was impressive]. For instance, [when asking about] triple-negative breast cancer [TNBC] and the definitions of different disease pathologies, it was accurate and reliable.

I typed in simple [prompts], just looking at the standard of care for [patients with] TNBC, and it gave a full description of the [phase 3] KEYNOTE-522 trial [NCT03036488] protocol, the agents that are involved in that [regimen], and the treatment duration, which is quite impressive, and [this was presented] in an easy-to-view layout.

I also [used Scout] to look at different risk factors for the development of breast cancer, [such as] defining atypical ductal hyperplasia and the level of risk at which this places patients, and there was some information with guidelines I was [testing], as well.

How do you think Scout could aid in routine tasks, such as searching for trial eligibility criteria or consolidating information?

Clinical trials are opening every day, and it's difficult to keep track of which trials are applicable to your patient population. Therefore, having a resource to immediately look up the most up-to-date trials and see the inclusion criteria and exclusion criteria to facilitate the process of enrolling patients onto trials is a great use [of Scout].

What's unique about Scout, as I understand it, is that all of its input is relevant information. The success of AI is driven by the information we feed it. The internet is a useful resource, but it has a lot of extraneous information that may or may not be accurate, whereas what we're discussing at an academic conference, with peer-reviewed articles and expert opinions, is going to be reliable, accurate information. That's the input going into Scout, and that is a testament to its validity and its uniqueness compared with other [AI platforms] and chatbots.

What are some of the limitations regarding the use of AI in clinical practice?

As we know, AI is all around us, both in our personal space and in our professional space. Breast cancer [treatment] is definitely a part of that.

Although AI is an amazing resource and extremely useful, we will have to be a little bit cautious in the way that we use it in our practice. My [presentation at Miami Breast] was more of a cautionary tale [showing that] doctors right now have to be careful about what we’re relying on in terms of the information we’re using to educate patients. While [AI] is an incredible resource and [already has] a place in our care, there are improvements to come.

I want clinicians to be a part of that [improvement] process [because] as AI grows and its use expands, physicians, clinicians, and providers are going to be the ones using it and figuring out the best ways to adapt and incorporate it into our practice. Perhaps more importantly, we’re also the ones who are going to be regulating it, so we have to make sure that [AI] is being used appropriately and that the information [output] is reliable and accurate.

In what ways have you studied the clinical use of LLM AI models?

[We conducted a study] that looked at the ability of large-language model AI platforms, [in this case] ChatGPT and Google Gemini, which are the 2 most common large-language models. [We] compared their performance with [that of] breast oncology fellows on the Breast Education Self-Assessment Program, a multiple-choice exam.

Fellows take this at the beginning and end of their fellowship year, and we took the questions, fed them to ChatGPT and Google Gemini, and compared their performance with a passing score for a breast oncology fellow.

The results of this study will be presented at the upcoming American Society of Breast Surgeons Annual Meeting; however, [the results indicated that] the ChatGPT and Gemini platforms did well but were not perfect.

We broke it down by question type, and we found that questions with images and questions pertaining to pathology were the types of questions where [those AI models] had the most difficulty.

As I was saying before, it's a bit of a cautionary tale. These platforms are extremely useful, but they're not perfect, and we have [more work to do] in figuring out how to use AI and rely on the information [it produces].

Reference

Feinberg J. Limitations of “Dr. Google.” Presented at: 42nd Annual Miami Breast Cancer Conference; March 6-9, 2025; Miami Beach, FL.