Much has been written regarding the essential role of clinical trials in the major advances in cancer therapy and the observed improvement in disease-related morbidity and mortality. The initiation of randomized trials comparing an existing standard of care with a proposed superior strategy has been a vital component in separating hypothesis and hype from objective and reasonably unbiased data.
However, it must be recognized that, to achieve this goal, it is necessary to pay attention to specific and essential components of the clinical trial process. These include, in the case of randomized studies, the selection of an appropriate control arm and trial sample size. Additionally, investigators should ensure treatment groups are well matched for recognized relevant prognostic factors (eg, age, tumor stage, prior therapy, and relevant comorbidities).
Unfortunately, despite great care in the development and conduct of trials in the oncology arena, one additional factor, in the opinion of this commentator, has not received adequate attention: the specific investigator (and/or sponsor) interpretation of the actual study results.
Concerns to be noted here may be relatively minor, such as the common use of quite subjective commentary to describe study findings. Terms such as meaningful clinical activity with manageable safety may fall into this category.1 What is meaningful and manageable to one clinician or patient may not be viewed in this manner by others, especially when the decision to use such ill-defined and possibly questionable terms in a manuscript submitted for peer review may not be that of the study investigators, but rather come directly from the sponsor or possibly a third-party communications group hired by a pharmaceutical company to provide writing/editorial assistance.
What is so difficult about simply concluding that “X% of patients in this study achieved a RECIST-defined response with a median duration of XX months (range, XX-XX),” that “XX% of patients experienced grade 3 or higher adverse effects (AEs), the most common of which were XXXX and XXXX,” or that “X% of patients required hospitalization for treatment-related AEs,” and letting readers decide for themselves whether the degree of clinical activity was meaningful and toxicity was manageable?
One may feel greater concern regarding the interpretation of certain objective data as positive outcomes. For example, in a recent report, a 7.9% response rate (3 of 38 patients) among patients with cervical cancer whose cancers did not express PD-L1 was suggested to provide a differential between the experimental checkpoint inhibitor being studied and another already on the market.1 Further examination of a larger patient population may confirm the utility of the agent in this clinical setting, or the duration of these uncommon responses may prove to be particularly prolonged. However, this commentator feels compelled to ask whether we are now at a point in oncology where an objective response rate of less than 1 in 10 patients following treatment—with what will surely be an expensive antineoplastic drug—should be viewed as a positive finding.
A far more serious concern must be addressed regarding published data that appear to suffer from deficiencies of both scientific and, most disturbingly, basic ethical considerations. Unfortunately, recently reported data from a phase 3 trial exploring a novel anticancer agent in platinum-resistant ovarian cancer provide a disquieting example of this phenomenon.2 What is highlighted for discussion here is not the study design or the experimental agent, but rather the selection of a study control arm—in this instance, the investigator’s choice of single-agent pegylated liposomal doxorubicin (PLD) or topotecan.
On a scientific basis, the rationale for this decision is quite unclear. It has been unequivocally demonstrated that combining single-agent PLD, topotecan, or weekly paclitaxel with bevacizumab (Avastin) significantly improves progression-free survival in platinum-resistant ovarian cancer, leading to FDA approval of this drug strategy.3 The trial highlighted here failed to reveal a statistically significant benefit for the experimental agent compared with the control regimen. However, this raises the question: How would the study results have been interpreted by the study investigators, various regulatory agencies/third-party payers, and, most important, treating oncologists and their patients if the data had revealed statistical superiority of this novel drug vs what is known today to be an inferior treatment approach in this clinical setting?
We now come to the final point: the ethical concern with the control arm in this trial, which called for PLD to be initially administered at a dose of 50 mg/m². Although PLD was initially approved by the FDA years ago for delivery at this dose level when employed as a single agent in platinum-resistant ovarian cancer, extensive data, including results of phase 3 randomized trials in this clinical setting, reveal the essential therapeutic equivalence of a 40-mg/m² dose along with substantially reduced, well-recognized symptomatic toxicity.4 Further, survey data reveal that the 50-mg/m² dose is rarely administered (4% of patients) in nonresearch, standard-of-care practice for platinum-resistant ovarian cancer.4
Therefore, the question to be asked of the study investigators and sponsor is: What was the justification for employing a PLD dose of 50 mg/m² when this potentially more toxic dose would infrequently be delivered to patients who were not participants in this research? Further, one must ask whether the institutional review boards responsible for ethical oversight of this study were aware of the current standard-of-care dosing of PLD, and if not, why not? Did the consent form for this study clearly state that the control arm dose of PLD was higher, with the potential for more serious AEs, than what the patient would likely receive if they did not participate in this study? Again, if not, why not?
Although it is not possible to know the thoughts of the peer reviewers and journal editors of this manuscript, it is reasonable to inquire whether they were aware of the concerns highlighted in this commentary.