Background

Clinical trials are a vital part of improving the treatment and care of patients. Due to the increasing costs of trials [1, 2] and the need to answer important research questions as rapidly and robustly as possible, new trial methods that can increase operational and statistical efficiency are of great interest. Adaptive trial designs [3] are one such class of methods; they provide pre-planned opportunities to use accumulating trial participant outcome data to make changes to the course of the trial, whilst ensuring the statistical properties of the trial remain intact and the results credible. Adaptive designs (ADs) have different features that can, for example, (1) improve the statistical power of the trial; (2) reduce the time taken and the number of participants required to evaluate treatments, thus potentially saving money and other resources; and (3) reduce exposure of trial participants to insufficiently effective, or more harmful, treatments by stopping recruitment to them early [4]. ADs are typically more statistically and operationally complex than traditional trials and require high-quality interim analyses to be undertaken, and the resulting decisions implemented, rapidly; they may therefore require higher levels, and more precise timing, of effort, resources and expertise to design, set up, deliver, analyse and report. The sample size and study length of an adaptive trial are often unknown at the outset, which can further complicate their resourcing. There has been little guidance aimed at non-commercial organisations and researchers who conduct clinical trials on how to resource adaptive trials appropriately. If adaptive trials are inadequately resourced, their advantages may be compromised, leading to an increased risk of operational or statistical biases [3, 5, 6].

The costing adaptive trials project

The Costing Adaptive Trials (CAT) project investigated the additional resources, as compared to similar non-ADs, required to support effective adaptive trials. Full details of the methods and results are reported in Wilson et al. [7]. Briefly, this research was undertaken in the UK in 2020 through a mock costing exercise. Research staff in seven academic UK Clinical Research Collaboration (CRC) registered Clinical Trials Units (CTUs) provided the staff and non-staff financial costs that they estimated were required to support an adaptive version of a trial, and a non-adaptive version of the same trial. This was undertaken for five different trial scenarios covering different types of ADs based on real trials run in the UK (see Additional file 1). The level of practical experience of the designs varied across the CTUs. The mock costing exercise was followed by a qualitative research component to understand the factors influencing the estimated resource requirements and differences between the non-adaptive and adaptive trial designs, as well as between CTUs.

Results demonstrated wide variability in the staff and non-staff resources anticipated across scenarios and CTUs, dependent, for example, on availability of core infrastructure programme funding or in-house IT systems. On average, there was a modest increase (2–4%) in resources anticipated for the AD, compared to the non-AD, within each scenario. This is consistent with comparisons using alternative methods [8]. The highest percentage increase was for statistical staff, followed by data management staff. There was inconsistency in whether additional resources for trial management staff were required across CTUs.

An important objective of the CAT project was to use results from the research to develop guidance for non-commercial organisations and researchers who design, plan, coordinate and deliver clinical trials. Here, we outline a five-step approach to aid, and potentially shorten, the time-consuming planning of adequate resourcing for adaptive trials (both staff time and non-staff costs). This approach was informed by our CAT research results, which focussed on CTU resources (excluding per-patient costs). We did not explore other research costs, such as intervention supply, or other methodological groups that may be involved in a trial, for example, health economists or researchers using qualitative methods. Thus, these are not fully considered in this guidance.

The process is shown in Fig. 1, with each step described in further detail below. It is predominantly aimed at academic organisations and researchers running clinical trials, but may also be relevant to other organisations and funders. Specific recommendations to funders are provided in the ‘Guidance for funders’ section. Although we focus on ADs, the process may be useful for other innovative designs such as master protocols [9] and seamless designs (e.g. phase II/III), which are often, but not always, adaptive.

Fig. 1
figure 1

Outline of process for considering and justifying resources for an adaptive design

Step 1 – What tasks are required to support the trial?

We start with the assumption that there is a proposal for a clinical trial, following the PICOS (Participants, Intervention, Comparator, Outcomes, Statistical analysis) framework [10]. Before following the process here, it is recommended to consider first whether an AD is suitable. For example, if the outcome measure on which the adaptations are based is not observed sufficiently quickly, then an AD is unlikely to provide improved utility [11].

Assuming an AD is suitable, it is helpful to first map out a recruitment strategy (including estimated sample size, number of sites, length of recruitment), a Gantt chart and the tasks that are required to support a clinical trial, regardless of whether an AD is used. We show some major tasks common to trials in Table 1 and how ADs impact on staff and non-staff resources in Table 2.
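
To illustrate how such a recruitment strategy might be mapped out quantitatively, the sketch below projects the recruitment period from a simple staggered site-opening model; the target sample size, number of sites, site-opening rate and per-site recruitment rate are hypothetical planning assumptions, not figures from the CAT project.

```python
# Illustrative projection of recruitment length from a simple staggered
# site-opening model; all parameters are hypothetical planning assumptions.

TARGET_N = 400                 # estimated sample size
N_SITES = 20                   # total number of recruiting sites
SITES_OPENED_PER_MONTH = 2     # site set-up rate
RATE_PER_SITE_PER_MONTH = 1.5  # participants per open site per month

recruited, month, open_sites = 0.0, 0, 0
while recruited < TARGET_N:
    month += 1
    open_sites = min(N_SITES, open_sites + SITES_OPENED_PER_MONTH)
    recruited += open_sites * RATE_PER_SITE_PER_MONTH

print(f"Projected recruitment period: {month} months")
```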

Table 1 Major tasks required to run a clinical trial and how adaptive designs may affect them
Table 2 Resources where adaptive designs increase use

For each task, it is useful to consider the workload required by broad categories of staff. The broad staff categories identified in the CAT project were trial management, data management and statistics. However, some institutions will have staff that might cross several categories (such as a statistical programmer) or be a cross-cutting role (e.g. quality assurance) and some trials will require staff from other methodological areas such as health economics not considered here. Some tasks will predominantly cause workload for one staff category (e.g. writing a Statistical Analysis Plan will be the responsibility of statistical staff with substantially less input from staff from other categories) whereas others may involve more than one (e.g. setting up and testing the randomisation system).

Some CTUs that took part in CAT described using their own costing template (also known as a budget planning tool [8]) that captured the amount of work or time required, which makes the costing process easier and more transparent. We have provided an example costing tool, implemented in a spreadsheet, in Additional file 2; it covers the list of tasks mentioned in Tables 1 and 2.
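
To illustrate the kind of calculation such a costing template performs, a minimal sketch is given below (the spreadsheet in Additional file 2 is the actual tool); all task names, staff day rates and effort estimates here are illustrative assumptions rather than CAT project values.

```python
# Minimal sketch of a trial costing template. Day rates (in £) and effort
# estimates (in days) are illustrative assumptions, not CAT project values.

DAY_RATES = {"trial_management": 300.0, "data_management": 280.0, "statistics": 350.0}

# Estimated staff effort per task: a non-adaptive baseline plus the extra
# effort anticipated for the adaptive version of the same trial.
TASKS = {
    "protocol_development": {"baseline": {"trial_management": 15, "statistics": 10},
                             "adaptive_extra": {"statistics": 5}},
    "randomisation_setup":  {"baseline": {"data_management": 8, "statistics": 4},
                             "adaptive_extra": {"data_management": 3, "statistics": 2}},
    "interim_analyses":     {"baseline": {},
                             "adaptive_extra": {"statistics": 20, "data_management": 10,
                                                "trial_management": 5}},
    "final_analysis":       {"baseline": {"statistics": 25},
                             "adaptive_extra": {"statistics": 5}},
}

def staff_cost(effort_by_category: dict) -> float:
    """Convert a {category: days} effort estimate into a staff cost."""
    return sum(days * DAY_RATES[cat] for cat, days in effort_by_category.items())

baseline_total = sum(staff_cost(t["baseline"]) for t in TASKS.values())
adaptive_total = baseline_total + sum(staff_cost(t["adaptive_extra"]) for t in TASKS.values())

print(f"Non-adaptive staff cost: £{baseline_total:,.0f}")
print(f"Adaptive staff cost:     £{adaptive_total:,.0f}")
print(f"Increase: {100 * (adaptive_total / baseline_total - 1):.1f}%")
```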

Some resources are also affected by the institution itself. Some CTUs participating in CAT had their own in-house randomisation and clinical data management system (CDMS); others outsourced these tasks to third parties. These differences in available infrastructure will influence the staff workload (across categories) and the non-staff costs required, depending on the adaptive features considered. Some institutions also had core infrastructure programme funding, which will likewise influence the staff resources and non-staff costs required.

Once the trial team has a good understanding of the resources required for the proposed non-adaptive study, it is time to progress to Step 2. We would note that in some cases, such as a phase I dose-finding trial, there may be no equivalent non-adaptive study to serve as a baseline, in which case this step may be more difficult.

Step 2 – How will the adaptive design features affect these tasks?

The next step in the process is considering the implications of the AD features (Table 3) on the tasks identified in Step 1 and how this influences the resources required. Clearly, this will depend on the proposed AD and adaptive features or adaptations considered.

Table 3 Additional implications of adaptive design features on resource use

We first consider elements affecting resource requirements that ADs have in common, across the life cycle of a trial: (1) whilst in set-up; (2) during recruitment and follow-up; and (3) at the time of the final analysis and reporting. We then provide some considerations for specific types of ADs.

Trial set-up

Trial set-up involves several tasks that may be affected by the AD, given that adaptive trials are typically more complex than non-adaptive trials. These tasks include writing the protocol, discussions with regulators, applying for regulatory and ethical approvals, development of the database, development of contracts with industry partners, design and development of randomisation systems, and setting up recruitment sites. An AD protocol may therefore require more effort (including, for example, fully describing the design and its statistical properties, potentially with additional documents summarising simulations), and extra time may be needed for approvals and site set-up. Based on the experience of the authors, ethics committees and (if applicable) regulators may have queries about the design that require clarifications or amendments and resubmission. Extra effort may also be required to develop patient information sheets (PIS) that explain the design in a clear, concise and accurate way without giving away information that could introduce biases into the conduct of the trial.

During recruitment and follow-up: conducting interim analyses and implementation of adaptations

A common property of ADs is that they involve one or more interim analyses whilst recruitment is ongoing and, depending on the results, implementing changes.

Interim analyses require high-quality data, available in a timely manner, on the variables used to make adaptation decisions. In some cases, this may be a single outcome variable, but other designs might use a range of outcome variables. As an example, the decision-making at the interim analysis in MIDFUT, an adaptive multi-arm multi-stage trial in diabetic foot ulcers [17], involves an efficacy outcome as well as safety data and early cost-effectiveness. Data cleaning, that is, ensuring data are as accurate and complete as possible, must be completed by the time of each interim analysis; the resources this requires, and any additional burden on site staff, trial managers and data managers in the period leading up to each interim analysis, should be considered.

Interim analyses also require additional statistical resource to undertake the analyses on the cleaned dataset. As well as the time to do this, additional tasks may be required. For example, an interim statistical analysis plan (SAP) would be required to lay out the planned analysis methods unambiguously, in advance of data being available. This is a formal requirement for a Clinical Trial of an Investigational Medicinal Product (CTIMP), which may also require a statistician not involved in the design or conduct of the trial to prepare the interim analyses, allowing the Trial Statistician to remain blind [18] to arm allocation; in other circumstances where this is not a formal requirement, it may still be desirable to ensure that interim results do not influence the conduct of the final analysis. The time for any additional statisticians to understand the complexities of the design should also be considered. Statisticians may also be involved in the data cleaning process, for example in identifying outliers to be checked with sites. All of these factors increase the resources required for statistical staff.

In some cases, experts in other areas might be required for the interim analysis. An example is the STOP-OHSS group sequential trial [19], assessing the clinical and cost-effectiveness of early active management of ovarian hyperstimulation syndrome compared to usual care. One interim analysis for non-binding futility early stopping is planned. A health economist was involved in the trial design by developing a preliminary model that informed the choice of an appropriate interim futility stopping rule. They will develop interim and final Health Economic Analysis Plans (HEAPs) and update the preliminary health economic model at an interim analysis [20].

After an interim analysis is completed, there may be pre-planned changes required to the sample size and trial systems. This may involve: changes to the CDMS and randomisation systems; making substantial amendments to the protocol and other trial documents and associated regulatory approvals [21, 22]; and implementing changes in trial sites. Depending on the nature of the changes required, this may increase resources required for CTU staff, or require increased costs to make changes to any outsourced systems.

Final analysis and reporting

Analysis and reporting will be influenced by the AD, adaptive features, and interim decisions made. The CONSORT extension for randomised adaptive trials [23], which provides guidance on clear reporting of all parts of the trial, is a useful resource to consider.

Overall numbers of data queries and the amount of cleaning around the time of the final analysis may be reduced, as much of this work will have been brought forward to the interim analyses. However, statistical analyses for ADs can be more complex than for traditional designs, especially the estimation of treatment effects and related quantities such as confidence intervals (see Robertson et al. [21, 22]), and final analyses may need to be conducted for arms closed at interim analyses. There may also be an impact on the time needed to recruit the remaining participants if the sample size is specified per arm. Early stopping of arms or of the trial may lead to a reduction in overall costs without compromising the trial’s integrity and validity.

Step 3 – How certain are the resource requirements?

Other ADs, such as sample size re-estimation, may increase the target sample size and therefore the trial duration. The change in resources required in this circumstance will depend on how this influences the tasks required. All CTUs that took part in the CAT mock costing exercise provided estimated resources based on the maximum target sample size and the time needed to recruit to it. This is useful to inform the maximum resource needs. However, one must consider all potential scenarios that can occur, the likelihood of each scenario, and how each would influence the resources required by the trial. In the more theoretical AD literature, a rule of thumb has been that quantities such as the ‘average sample size’ (i.e. the sample sizes associated with all possible scenarios, weighted by the probability of each scenario occurring) are a good representation of the efficiency of an AD. These metrics are averaged over many hypothetical trials and are not necessarily helpful for the specific trial being costed. They also do not take into account the fixed costs of starting and stopping the trial, or the need to have funds available to cover all possible scenarios.
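
As a simple sketch of the distinction between expected and maximum resource needs, the following calculation weights the cost of each possible course of the trial by its probability; the scenarios, probabilities and costs are entirely hypothetical.

```python
# Illustrative comparison of expected versus maximum trial cost across the
# possible courses of an adaptive trial. By analogy with the 'average sample
# size', E[cost] = sum_k p_k * cost_k over scenarios k. All figures are
# hypothetical.

scenarios = [
    # (description, probability, total cost in £)
    ("stop early for futility at interim", 0.25, 1_200_000),
    ("stop early for efficacy at interim", 0.15, 1_300_000),
    ("continue to planned maximum sample size", 0.60, 2_000_000),
]

assert abs(sum(p for _, p, _ in scenarios) - 1.0) < 1e-9  # probabilities sum to 1

expected_cost = sum(p * cost for _, p, cost in scenarios)
maximum_cost = max(cost for _, _, cost in scenarios)

print(f"Expected cost: £{expected_cost:,.0f}")  # a summary of average efficiency
print(f"Maximum cost:  £{maximum_cost:,.0f}")   # funds that must still be available
```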

For designs that may lead to a high variation in the resources required, the host institution must carefully consider the implications of this. As in Step 2, larger organisations with many projects and staff may be better able to deal with uncertainty than smaller organisations.

It is important to remember that uncertainty is not unique to ADs. In most trials, there is uncertainty around feasibility aspects such as recruitment rates, and whether specified milestones will be met (e.g. as monitored in the internal pilot phase of a trial [29]). Most non-adaptive trials can also be stopped early, e.g. due to safety concerns.

Step 4 – What are the funder expectations?

Currently, most funders of academic or public sector trials (in the UK) provide limited flexibility in specifying the costs requested in a funding application. A single figure is required, which the funding panel considers when deciding whether the research offers value for money. This figure would generally be the maximum required by the trial. If the cost of an adaptive trial is highly variable (e.g. depending on how many arms stop early, or whether the recruitment target increases following a sample size re-estimation), then this maximum may make the research look expensive, but it minimises the risk of needing to return to the funder for additional resources to complete the trial, and also minimises the risk of an inconclusive trial result. In the CAT project, we found that CTUs presented this maximum amount to ensure the research could be delivered in the ‘worst-case scenario’ requiring the maximum sample size.

Application forms should offer opportunities to insert more detail, such as an estimate of the cost savings if a particular adaptation occurs. Most funders have a justification of costs section, which would allow estimated cost savings under certain scenarios to be included. In addition, some funders of trials (e.g. the Medical Research Council (MRC) Developmental Pathway Funding Scheme, DPFS [28]) currently require projects to be split into milestones, each with an associated cost. This may allow better specification of the uncertainty in the cost of an AD, if milestones are linked to interim analyses, and encourage organisations to plan ahead. This last approach has some limitations, such as not allowing alternative paths of milestones depending on the results of an early milestone.

An alternative approach is for funders to allow requests for funded extensions of research or variations to contracts. In our experience with UK funders, this has not commonly been encouraged as a way to handle uncertainty in ADs, and it does not provide certainty of funding in the worst-case scenario. It may, however, be a useful way to handle sample size re-estimation designs or platform trials that can add new arms.

It is vital to bear in mind the funder requirements and flexibility of the application form when deciding how to best present the uncertainty of the resources required.

Step 5 – Justifying and refining the design

Considering Steps 2–4 will determine how the design influences the resources required and how this can be communicated to the funder. It is then important to consider how different aspects of the design could be modified to retain most or all of the benefits whilst minimising any statistical or resource issues that arise. This may include reconsidering the number of interim analyses, the types of adaptations that are implemented, and the specific decision rules that are specified. As well as the resources required by the trial design, it is also important to consider the quality of evidence the trial will provide.

Value of information analysis (VOIA) methods may be useful for weighing the benefits against the costs of collecting more information to inform (and reduce the uncertainty of) a technology adoption decision [30, 31]. Currently, there is limited use of this approach in practice in the context of ADs [32] as well as non-adaptive designs [33]. However, VOIA methods can be used to quantify the value of non-ADs and alternative ADs, allowing the comparison of multiple designs. This can help the research team to refine and justify their chosen design [34, 35]. A clear understanding of the costs of the trial is required for these analyses, which is facilitated by following Steps 1–4.
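
To give a flavour of what a VOIA calculation involves, below is a minimal Monte Carlo sketch estimating the expected value of perfect information (EVPI) for a simple adoption decision. The distribution of incremental net benefit and the population size are hypothetical assumptions, and a full VOIA supporting trial design (e.g. expected value of sample information) would be considerably more involved.

```python
# Minimal Monte Carlo sketch of the expected value of perfect information
# (EVPI), one building block of value of information analysis (VOIA). The
# distribution of incremental net benefit (INB) below is hypothetical.
import numpy as np

rng = np.random.default_rng(2024)

# Uncertain incremental net benefit (£ per person) of the new technology
# versus usual care, reflecting current evidence before the trial.
inb_samples = rng.normal(loc=500.0, scale=2_000.0, size=100_000)

# Decision with current information: adopt only if expected INB > 0.
value_current_info = max(np.mean(inb_samples), 0.0)

# With perfect information, the right decision is made in every state of
# the world, so value is the mean of the per-sample best decision.
value_perfect_info = np.mean(np.maximum(inb_samples, 0.0))

evpi_per_person = value_perfect_info - value_current_info
population = 50_000  # hypothetical population who would benefit
print(f"EVPI per person: £{evpi_per_person:,.2f}")
print(f"Population EVPI: £{evpi_per_person * population:,.0f}")
```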

Example

Table 4 presents an example of how a funded adaptive trial could have followed the five-step process.

Guidance for funders

Funders of clinical trials benefit from the appropriate use of ADs as they provide higher efficiency and more robust evidence that ultimately benefits patients. In our experience, some funders have been more encouraging than others in the use of innovative designs [36]. Nevertheless, there are some barriers imposed that may stifle methodological innovation; some of these were raised earlier in this article.

Currently, some aspects of public funding of trials may penalise innovative designs. Because only limited space is allowed for specifying the requested funding, applicants may need to focus on the maximum ‘worst-case scenario’ cost, which may make the trial look expensive. Having distinct funding panels that typically focus on a particular phase of trial makes it difficult to obtain funding for seamless designs spanning multiple phases. Funding agreements that penalise investigators for stopping a trial early because they have answered the research question sooner (i.e. by completely cutting off funding) make some efficient designs unappealing to academic organisations. Furthermore, the additional resources required before a funding application are usually unfunded, often making organisations reliant on core funding when exploring an adaptive trial design. Funders could promote greater use of ADs by making infrastructure awards that would provide further capacity to develop them.

There are some ways in which funders could encourage more innovation by allowing more flexibility in their awards and application processes (Table 5).

Table 5 Recommendations to funders to encourage increased appropriate use of innovative designs

We would also advise funders, in collaboration with applicants, to consider whether overly complex ADs could be simplified (with resource savings) without loss of benefit [11]. Conversely, funders should encourage applicants to add adaptive elements where they would benefit the evidence generated or the trial participants.

We would like to highlight that several funders have made great progress in addressing barriers to the wider use of innovative designs. It is also understandable that some funders operate under considerable constraints (e.g. annual budgets that cannot be carried forward) that make it more difficult to address some barriers.

Discussion

Even once there is a compelling reason to expect an AD to bring better and quicker evidence, resourcing and justifying it is a time-consuming process. The five-step approach outlined here provides structure to that process. It complements literature covering the process of designing adaptive trials [3, 4, 38], and addresses issues raised in work on barriers to the implementation of ADs [39, 40], the resourcing of clinical trials [41–43], and the additional resources required to support adaptive trials [7, 8].

We have restricted attention to resourcing the trial after it is funded. Like any clinical trial, the process of designing an adaptive trial requires a substantial amount of input from a multidisciplinary team. However, more specialist expertise and a greater amount of time is generally required for an AD. This is difficult to resource other than from institutional core funding for trial development infrastructure. Some funders may offer development grants which would help cover this additional work upfront.

We would emphasise that ADs can provide many benefits that may outweigh cost considerations. These include a lower average time taken to complete the trial, better outcomes for participants recruited to the trial, and higher-quality evidence provided by the trial. In addition, the apparently higher cost of supporting an AD may just be a ‘worst-case scenario’ and may be offset by substantially reduced costs if the trial finishes early. The only work we are aware of that has investigated differences in cost between adaptive and non-adaptive trials in practice is Martin et al. [8], which examined several different cost-drivers using regression models. The authors did not find a significant cost difference between adaptive and non-adaptive designs, although such a difference is likely to be difficult to estimate reliably, as the use of ADs may differ by phase and indication.

The key factor that justifies an AD is the ratio of benefit it provides (both to the efficiency and patient benefit of the trial itself and the long-run impact of the evidence generated) to the additional cost it incurs [44, 45]. Research that can inform and improve both parts of this ratio is needed. Trial methods that can maintain or improve the speed and quality of interim analyses whilst reducing the resources required would be very useful; methods that may improve the benefit provided without requiring more resources would similarly be of great interest. Overall, a framework for better quantifying the benefit of an AD in the presence of real-world issues such as delay in assessment of outcomes [11] would help justify this to funders.

ADs provide advantages and complexities for other types of analyses too. In our CAT research, and in this guidance, we have not highlighted areas such as health economics, qualitative research and evidence synthesis. Previous work, e.g. Flight et al. [32], has noted the impact of ADs on health economic considerations. However, further work is needed to investigate how ADs affect the resources required by health economists and other methodological disciplines. Additional resources may be required for designing the adaptive trial, contributing to interim decision-making, and appropriately analysing the final data to account for the AD.

As the recommendations in this paper are based on an investigation of UK academic CTUs, we acknowledge they are most relevant to academic trials run in the UK. Co-authors of this paper have experience with international academic funders, including the European Commission, the Deutsche Forschungsgemeinschaft (DFG, Germany), the National Institutes of Health (NIH, USA), the National Health and Medical Research Council (NHMRC, Australia) and Fight Kids Cancer (Belgium), and this experience suggests that the issues identified here are very similar elsewhere. For trials funded by large international pharmaceutical companies, some of the guidance will be relevant, but there is likely to be much more focus on average cost, the power of trials, and portfolio optimisation. We would welcome further papers that consider how our recommendations may best be tailored to trials outside the UK and trials run by industry, including smaller companies.

Evolving funders’ application forms to enable a rationale for costs to sit alongside the design choices, paired with greater flexibility in the way costs are presented, could improve transparency whilst enabling the many benefits of innovation in trial design to be realised more broadly in clinical research through the funding of more trials using ADs. It is also important that funders incentivise innovation in trial designs that can lead to improved efficiency, quality of evidence, and patient benefit.

Although we have concentrated on ADs, the process could be used for other innovative approaches. For example, master protocols [9] (including basket trials and umbrella trials, and platform and living protocols) are not necessarily adaptive but may require similar considerations of appropriately resourcing them. Platform trials, which offer the opportunity to add in new arms, may require considering the additional costs incurred from the additional arm compared with the fixed costs of the platform.

Through better guidance on appropriately resourcing ADs, we hope that their use can continue to increase, which will play an important role in improving patient outcomes and improving research productivity.