Background

Randomized controlled trials (RCTs) are considered the “gold standard” for assessing the clinical efficacy of interventions. However, the high cost and limited efficiency of classical RCTs [1, 2] have exposed the need for more efficient designs. Adaptive design, characterized by its flexibility and efficiency, allows for timely decision-making based on accumulating trial data [3, 4], such as stopping trials early [5], allocating more participants to better-performing groups [6], or dropping inefficient arms [7]. The advantages of adaptive design in reducing research time [8, 9], saving sample size [10, 11], and improving success rates [12, 13] have prompted many researchers to incorporate it into the new drug development process.

Reviews [14,15,16,17] have specifically focused on the application and reporting of adaptive trials. One review [14], which included adaptive trials other than phase I and seamless phase I/II trials, found that seamless phase II/III trials were the most frequently used type, and that many researchers had failed to adequately report data monitoring committees (DMCs) and blinded interim analyses. Another literature survey [17], covering phase II, phase III, and phase II/III adaptive trials in oncology, found that adaptive design was commonly applied in phase III trials and that the reporting of adaptive design-related methods was inadequate. A review [15] summarizing the features of 60 adaptive trials with specific methodology types showed that descriptions of the statistical methods were poor. A systematic review [16] assessing the reporting compliance of group sequential RCTs against the Consolidated Standards of Reporting Trials (CONSORT) 2010 checklist revealed that the protocols needed for design details were often inaccessible. However, these studies had important limitations in addressing the application and reporting of adaptive trials. First, the included adaptive trials were restricted to specific clinical phases and disease areas. Second, the studies focused on identifying deficiencies in specific aspects of interest (e.g., statistical methods). Third, none of the studies focused on drug trials. Thus, the findings of those studies were not comprehensive and may not be generalizable to other adaptive design types.

The Adaptive designs CONSORT Extension (ACE) statement, a reporting guideline for adaptive trials, was published in 2020 to advise clinical researchers on how to report the details of adaptive designs [18]; the statement is also considered a valid tool for evaluating the reporting quality of adaptive trials. Our study aimed to retrieve adaptive drug RCTs across all phases and disease areas in order to systematically investigate the overall application of adaptive design in drug RCTs, comprehensively identify gaps in reporting, and determine the extent to which adaptive design information was reported before publication of the ACE checklist, thereby providing evidence to guide improvements and advocate adequate reporting in the future.

Materials and methods

Eligibility criteria

We selected studies according to the following criteria: (1) RCTs explicitly stated to be adaptive trials or applying any type of adaptive design; (2) RCTs assessing the efficacy or safety of drugs; and (3) RCTs published in English-language journals. We excluded: (1) re-published studies; (2) protocols, abstracts, or re-analyses of adaptive trials; and (3) incomplete trials.

Search strategy and screening

We searched EMBASE, MEDLINE, Cochrane Central Register of Controlled Trials (CENTRAL), and ClinicalTrials.gov databases from inception to January 2020. We used both subject headings and free-text terms related to adaptive clinical trials to identify relevant studies (See Appendix 1 for the search strategy).

Data extraction

We generated a data extraction table to record the following information: first author, publication year, journal (quartile 1 as defined by Journal Citation Reports [JCR] vs. others), reasons for utilizing adaptive designs, trial center (multicenter, single-center), whether the trial was international, trial clinical phase, adaptive design type, disease area, type of control (active, non-active, both), type of primary outcome, expected sample size, randomized sample size, and funding source (government, private for-profit, private not-for-profit, not funded, or unclear).

We extracted the primary outcome according to the following strategy: (1) if a trial specified one or more primary outcomes, we selected it (or, if several, the first one) as the primary outcome; (2) if a trial did not specify a primary outcome, we selected the first outcome reported in the results. Further, we classified the selected primary outcomes into two types: clinical outcomes (clinically meaningful endpoints that directly measure how a patient feels, functions, or survives) or surrogate endpoints (laboratory measures or physical signs intended to be substitutes for clinically meaningful endpoints) [19].
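As a hypothetical illustration of this selection rule (the function and outcome names below are our own and do not appear in the study), the strategy can be sketched as:

```python
from typing import Optional, Sequence

def select_primary_outcome(specified: Sequence[str],
                           reported: Sequence[str]) -> Optional[str]:
    """Extraction rule sketch: prefer the trial's specified primary
    outcome (taking the first if several are listed); otherwise fall
    back to the first outcome reported in the results."""
    if specified:
        return specified[0]
    if reported:
        return reported[0]
    return None  # no outcome information available

# Example: a trial specifying overall survival keeps that outcome,
# while a trial with no specified primary outcome falls back to the
# first reported one.
```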

Based on the literature [12,13,14], we classified adaptive designs into 10 types: group sequential, adaptive dose-finding, adaptive randomization, sample size re-estimation, adaptive hypothesis, biomarker adaptive, seamless, pick the winner/drop the loser, adaptive treatment-switching, and multiple adaptive designs. We identified and extracted adaptive design types as planned, regardless of whether they were ultimately implemented, to avoid omitting any type.
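The "multiple adaptive designs" category implies a simple coding rule: a trial planning more than one adaptation type is assigned to that category. A minimal sketch of this rule, under our own reading of the classification scheme (the identifiers are illustrative, not from the study):

```python
# The nine single-adaptation types from the classification; trials
# planning more than one are coded as "multiple adaptive designs".
ADAPTIVE_TYPES = {
    "group sequential", "adaptive dose-finding", "adaptive randomization",
    "sample size re-estimation", "adaptive hypothesis", "biomarker adaptive",
    "seamless", "pick the winner/drop the loser", "adaptive treatment-switching",
}

def classify_design(planned: set) -> str:
    """Classify a trial by its planned (not necessarily implemented)
    adaptations, per our assumed reading of the coding scheme."""
    unknown = planned - ADAPTIVE_TYPES
    if unknown:
        raise ValueError(f"unrecognized design type(s): {unknown}")
    if len(planned) > 1:
        return "multiple adaptive designs"
    if len(planned) == 1:
        return next(iter(planned))
    return "none"
```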

Reporting quality assessment

The ACE checklist, a CONSORT extension specific to adaptive trials, provides essential reporting requirements to enhance transparency and improve reporting; we therefore assessed the reporting quality of the included studies against it. First, we evaluated the included adaptive RCTs’ compliance with the 26 topics of the ACE checklist. Second, we assessed the seven essential (new) items specific to adaptive trials in the ACE checklist, the nine items modified relative to the CONSORT 2010 checklist, and the six items with text expanded for adaptive design. The response to each topic/item could be “yes”, “no”, or “not applicable”, indicating compliance with ACE, non-compliance, or inapplicability, respectively. Based on previous literature, we defined an adherence proportion ≤ 30% as underreporting.

Suggestions for reporting of drug adaptive randomized trials

Flexibility is a significant strength of adaptive designs, but this flexibility heightens the need for rigorous reporting of both pre-planned and actual changes in adaptive trials. Our results indicate that reporting of drug adaptive randomized trials is frequently inadequate, especially for essential items such as statistical analysis plan (SAP) accessibility, confidentiality measures, and assessments of similarity between interim stages. Such inadequate reporting may create ambiguity about the planned modifications and the reasoning behind actual decisions, ultimately undermining the credibility of findings from drug adaptive design trials.

Future adaptive trials should adhere to the ACE checklist to ensure that all pertinent details are reported, particularly the items essential to adaptive design. Journals should consider requiring authors to follow the ACE checklist when reporting the design, analysis, and results of adaptive trials.

Conclusion

The use of adaptive designs has increased, primarily in early-phase drug trials. Group sequential design is the most frequently applied method, followed by adaptive randomization and adaptive dose-finding designs. However, the reporting quality of adaptive trials is suboptimal, especially for essential items. Our findings suggest that clinical researchers need to provide adequate details of adaptive design and adhere strictly to the ACE checklist. Journals should consider requiring such information for adaptive trials.