Good data and data analysis can tell doctors a lot about their performance that they may have overlooked, said Bobby Green, MD, senior vice president for clinical oncology at Flatiron Health in New York City. Without careful analysis, doctors may assume they’re doing the right thing even though the truth may be different. “I think I’m a good oncologist,” he said. “I know my patients think I’m a good oncologist. My referring docs think I’m a good oncologist, but the reality is I don’t really have any idea whether I’m a good oncologist.” The difference is being able to prove it by the numbers, he said.
CMS has put so much emphasis on reported data that it is now paramount for oncology practices to understand how data should be sifted and interpreted to arrive at meaningful conclusions, said Green, who, in addition to his work for Flatiron, continues to practice as an oncologist 1 day a week. He spoke in April at the 2017 Community Oncology Alliance annual meeting.
At Flatiron, Green is working on ways to mine healthcare information for clues to better outcomes and lower costs. Through his clinical work, he is participating in the Oncology Care Model (OCM), a CMS pilot that pursues those same goals. “We’re being asked to look at several different things,” he said. “We’re being asked to measure the quality and cost within our practices. We’re being asked to make interventions in our practices that have the potential to affect care and the potential to affect the financial viability of our practices.”
In his presentation, Green described numerous situations in which cost savings and quality initiatives based on Big Data can go awry without careful selection and handling of data points. The Medicare Access and CHIP Reauthorization Act (MACRA), the Medicare reform law, has been CMS’ tool to spur medical practices to gather data and report on a number of performance measures, and the analyses based on these data will determine how well practices are remunerated. Oncology practice managers and physicians must therefore understand how to correctly interpret both the data they gather themselves and the data coming to them from CMS, Green said. He noted that under the CMS plan, by 2020, 60% of the physician payment formula will be based on performance as measured from reported data.
Good measurement begins with a sound analysis plan and continues with a willingness to look deeper than surface impressions, he said. “The first answer you get is not always the right answer.” As an example, he reviewed a Flatiron study of EGFR and ALK testing in cases of squamous non-small cell lung cancer (NSCLC). Ideally, about 20% of patients with squamous NSCLC would be tested for EGFR and ALK, particularly those with no lifetime history of smoking or those for whom only inadequate tissue samples were obtained. The Flatiron study, covering 4095 patients across 206 clinics, showed an average testing rate of 21%, which Green said was “spot on.” But on closer inspection, it was discovered that some clinics had been testing fewer than 10% of patients with squamous NSCLC, whereas others had been testing close to 100%, both of which were clearly undesirable extremes, Green pointed out.
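The arithmetic behind this pitfall is easy to reproduce. Below is a minimal sketch in Python (pandas) using invented patient-level records; the clinic_id and tested columns are hypothetical stand-ins, not the Flatiron study’s actual schema. It shows how a pooled rate can look “spot on” while individual clinics sit at undesirable extremes.

```python
# A minimal sketch of the aggregate-vs-per-clinic check described above.
# The records and the clinic_id/tested columns are invented stand-ins.
import pandas as pd

records = pd.DataFrame({
    "clinic_id": ["A", "A", "A", "B", "B", "B", "B", "C", "C", "C"],
    "tested":    [1,   1,   1,   0,   0,   0,   0,   1,   0,   0],
})

# The pooled rate can look reasonable...
print(f"overall testing rate: {records['tested'].mean():.0%}")

# ...while per-clinic rates reveal extremes at both ends.
print(records.groupby("clinic_id")["tested"].mean())
```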
This example shows the importance of looking at data more closely to avoid being misled by superficial observations, he said. “If you drill down, you actually see some discrepancies, and I would argue that these are discrepancies that if you’re a practice that’s either testing too many or not testing enough patients, this is an improvement that you would want to make.”
Conversely, an initial examination of data can suggest you’re doing the wrong thing, and only upon closer inspection can you determine that things are being done correctly. To demonstrate, Green presented a study of KRAS testing in patients with metastatic colorectal cancer (mCRC) who were potential candidates for EGFR inhibitor therapy. He noted that KRAS testing is advised for mCRC because it is harmful to give EGFR inhibitor therapy to patients whose tumors carry KRAS mutations. An initial look at the data showed KRAS testing rates peaking at 71% of patients (n = 168) in 2012 and dropping to 57% (n = 748) in 2014, neither of which was acceptable, Green said.
Further analysis showed one clinic testing as many as 90% of patients and another testing just 35% of patients. There was wide variation among the remaining clinics, too. That wasn’t good either. But then Flatiron looked at testing by line of therapy and found that the overall testing rate reached 90% of patients as they progressed through multiple lines of therapy, which showed that, in general, KRAS testing rates were acceptable. The reason for the discrepancy was that not all doctors were testing patients at the same time. “If people weren’t going to get an EGFR inhibitor in the first line, some practices didn’t test until they were closer to being ready to do that,” Green said. “The answer was that there was variation in the timing. That’s OK. Everyone’s doing the right thing. Practices are performing as you would expect.”
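To make that stratification concrete, here is a minimal sketch with hypothetical columns: line_reached is the furthest line of therapy a patient reached, and line_tested is the line at which KRAS testing occurred (None if never tested). A naive overall rate looks low, but conditioning on line of therapy shows patients getting tested by the time the result matters.

```python
# A minimal sketch of stratifying a testing rate by line of therapy rather
# than by a calendar snapshot. All columns and values are hypothetical.
import pandas as pd

pts = pd.DataFrame({
    "patient_id":   [1, 2, 3, 4, 5, 6],
    "line_reached": [1, 1, 2, 2, 3, 3],
    "line_tested":  [1, None, 2, None, 1, 3],
})

# Naive view: the overall share of patients ever tested looks low.
print(f"all patients: {pts['line_tested'].notna().mean():.0%} ever tested")

# Conditioned view: among patients who reached a given line, how many had
# been tested by that point? The rate climbs toward 100% in later lines.
for line in (1, 2, 3):
    reached = pts[pts["line_reached"] >= line]
    rate = (reached["line_tested"] <= line).mean()
    print(f"reached line {line}: {rate:.0%} tested by that line")
```

The same records support opposite conclusions depending on the denominator, which is exactly the trap Green described.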
One important aspect of that finding is that in an environment where doctors are paid according to CMS’ view of whether they are delivering care appropriately, it is essential for CMS to look at the right data and evaluate performance based on correct assumptions about the meaning of that data, Green said. “Imagine if this [KRAS-testing example] were part of a [CMS] quality metric and it affected payment reimbursement and shared savings. That’s all possible in the new world we’re living in, but when you dig into the data you actually find a bit of a surprise—that people are not doing a bad job,” Green said.
A huge amount of data is being produced as a result of the transition to electronic health records (EHRs), but so far it hasn’t been possible to extract as much useful information and insight from that data as doctors and researchers would like, Green said. Nevertheless, the field of data mining is moving forward and the possibilities are exciting, he added. As part of the process, it’s important to make meaningful comparisons of data and not to be forced to collect and relay data that won’t contribute to meaningful analysis, Green said. “We have quality metrics that we need to report on, and I fundamentally believe that the metrics I’m reporting on don’t do a lot to really deliver higher value care for my patients. I don’t believe that checking a box that says I’ve assessed my patient for pain is going to address their pain better simply because I’ve checked that box.” For that reason, practices need to be vigilant that CMS doesn’t saddle them with quality reporting requirements that lengthen the workday but don’t add value to the clinical process.
Beyond measurement, what practices need from CMS is feedback that amounts to actionable information, Green said. The volume of Medicare claims data given to OCM practices has been both beneficial and overwhelming. This information can provide valuable insights such as cost by disease, cost drivers, drug adherence, hospitalizations and emergency department use, and patient comorbidities. Its limitations include the inability to tell which physicians are responsible for costs, the age of the claims data by the time it arrives, which makes it difficult to act on, and the lack of information about the stage or extent of a patient’s disease. “There’s lots of clinical variation that has a bearing on cost that’s actually hard to get out of claims data,” Green said.
He noted that when clinical data is combined with claims data, it becomes possible to get a much fuller and more useful picture of a patient’s journey through the healthcare system (Figure). However, it is difficult to arrive at such a picture, and Green said it was only with the help of statisticians and data analysts at Flatiron that he was able to fill in many of these blanks. “It’s important at the practice level to make sure that whoever is doing this is doing it right. I think it’s still an evolving field how we should appropriately do some of these analyses,” he said.
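As a rough illustration of what such a combination might look like, the sketch below joins invented claims records to invented clinical records on a shared patient identifier. All column names and values are assumptions made for illustration, not an actual OCM or Flatiron schema.

```python
# A minimal sketch of joining claims data to clinical (EHR) data on a
# shared patient identifier. Columns and values are invented.
import pandas as pd

claims = pd.DataFrame({
    "patient_id": [101, 102, 103],
    "total_cost": [48_000, 95_000, 51_000],
    "er_visits":  [0, 3, 1],
})

clinical = pd.DataFrame({
    "patient_id": [101, 102, 103],
    "diagnosis":  ["mCRC", "mCRC", "mCRC"],
    "stage":      ["IVA", "IVB", "IVA"],
    "ecog":       [0, 2, 1],  # performance status: absent from claims alone
})

# The join attaches clinical context (stage, performance status) that
# claims alone cannot supply, so a cost outlier can be interpreted.
joined = claims.merge(clinical, on="patient_id", how="inner")
print(joined.sort_values("total_cost", ascending=False))
```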
Combining data to develop this fuller picture of a patient’s care path requires the use of both structured data, which is organized so that machine-based analysis can read it directly, and unstructured data, which may take the form of free-text notations in the EHR. For example, without both forms of medical data, it may be impossible to distinguish whether high costs of care are being generated by a particular physician’s actions or whether the patient’s disease is an outlier that follows an inconsistent path of development, Green said.
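Unstructured data can sometimes be converted into structured flags programmatically. The toy example below pulls a smoking-status flag out of invented note text with a regular expression; the note and the pattern are purely illustrative, and real-world extraction from EHR notes is considerably harder than this.

```python
# A toy example of deriving a structured flag from unstructured note text.
# The note text and the regular expression are purely illustrative.
import re

note = "Pt with squamous NSCLC; never smoker; tissue insufficient for NGS."
never_smoker = bool(re.search(r"\bnever[- ]smoker\b", note, re.IGNORECASE))
print(never_smoker)  # True
```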
“There are always going to be inaccuracies, there are always going to be limitations, and that ultimately is going to introduce bias into your actions,” he said. “We need to be careful we don’t make interventions in areas where we’re not sure about what we’re doing or about what we’re trying to change.”
It is also important to make clear distinctions between patient cohorts for the same disease state. Because patients differ by age and response to care, they must be separated into appropriate cohorts for the purposes of physician performance measurement and reward, Green concluded. “If we’re going to be taking risk as oncologists—financial risk for our patients—we need to understand the different cost of the different subgroups of patients.”
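As a final sketch, cohorting before comparison can be as simple as grouping on clinically meaningful keys. The diagnosis and age_band columns and the dollar figures below are invented for illustration; a real scheme would use clinically validated risk strata.

```python
# A minimal sketch of separating patients into cohorts before comparing
# costs. Cohort keys and dollar figures are invented for illustration.
import pandas as pd

pts = pd.DataFrame({
    "diagnosis": ["NSCLC", "NSCLC", "NSCLC", "mCRC", "mCRC", "mCRC"],
    "age_band":  ["<65", "65+", "65+", "<65", "<65", "65+"],
    "cost":      [62_000, 88_000, 91_000, 45_000, 47_000, 70_000],
})

# A single pooled average would blur these subgroups; per-cohort statistics
# show the cost spread a risk-bearing practice must understand.
cohort_costs = pts.groupby(["diagnosis", "age_band"])["cost"].agg(["mean", "count"])
print(cohort_costs)
```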