Session
Randomization and analysis of clinical trials

Presentations
37-random-analysis-trials: 1
Distributive randomization: a pragmatic design to evaluate multiple simultaneous interventions in a clinical trial

1Université Paris Cité, Inserm, IAME, 75018 Paris, France; 2Département Epidémiologie Biostatistiques et Recherche Clinique, AP-HP, Hôpital Bichat, 75018 Paris, France

Background: In some medical indications, numerous interventions have a weak presumption of efficacy but a good track record or presumption of safety, which makes it feasible to evaluate them simultaneously. This study evaluates a new design that randomly allocates a pre-specified number of interventions to each participant and statistically tests the main intervention effects. We compare it to factorial trials, parallel-arm trials and multiple head-to-head trials, and derive good practices for its design and analysis. We extend the approach by varying the number of interventions from one patient to the next, enabling the simultaneous evaluation of an intervention program comprising several components and of each component individually, with proper contrasts.

Methods: We simulated various scenarios involving 4 to 20 candidate interventions/components, of which 2 to 8 could be allocated simultaneously. A binary outcome was assumed. One or two interventions were assumed effective, with various interactions (positive, negative, none). Efficient combinatorics algorithms were created. Sample sizes and power were obtained by simulations in which the statistical test was either a difference of proportions or a multivariate logistic regression Wald test, with or without interaction terms for adjustment.

Results: Distributive trials reduce sample sizes 2- to 7-fold compared to parallel-arm trials. An unexpectedly effective intervention causes small decreases in power (<10%) if its effect is additive, but large decreases (possibly down to 0) if it is not, as in factorial designs. These large decreases are prevented by adjusting the analysis with interaction terms, but these additional estimands have a sample-size cost and are better pre-specified. The issue can also be managed by adding a true control arm without any intervention, or by exploiting the variance-covariance matrix, which is all the more useful in the multi-component intervention use case.

Conclusion: Distributive randomization is a viable design for the massively parallel evaluation of interventions in constrained trial populations. It should be introduced first in clinical settings where many under-characterized interventions are potentially available, such as disease prevention strategies, digital behavioral interventions, dietary supplements for chronic conditions, or emerging diseases. Pre-trial simulations are recommended, using publicly available code.
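As a rough illustration of the allocation scheme described above, the following minimal simulation assigns each participant a random subset of interventions and tests each main effect with a difference of proportions. All parameter values and function names are hypothetical assumptions for this sketch, not the authors' published code.

```python
# Minimal sketch of distributive randomization (illustrative parameters,
# not the authors' code): each participant receives a random subset of
# k interventions out of n candidates, and each main effect is tested by
# a difference of proportions between receivers and non-receivers.
import numpy as np

rng = np.random.default_rng(2024)

def distributive_trial(n_participants=600, n_interventions=8, k=3,
                       base_rate=0.30, effects=None):
    """Simulate one distributive trial. `effects` maps intervention
    index -> additive reduction in event probability (additivity assumed)."""
    effects = effects if effects is not None else {0: 0.10}
    # Allocation: k of n interventions drawn uniformly for each participant
    alloc = np.zeros((n_participants, n_interventions), dtype=bool)
    for i in range(n_participants):
        alloc[i, rng.choice(n_interventions, size=k, replace=False)] = True
    # Outcome: event probability reduced additively by effective interventions
    p = np.full(n_participants, base_rate)
    for j, delta in effects.items():
        p -= delta * alloc[:, j]
    y = rng.random(n_participants) < p   # binary outcome (event yes/no)
    return alloc, y

def main_effect_z(alloc, y, j):
    """Two-proportion z statistic contrasting receivers vs. non-receivers
    of intervention j (each participant contributes to several contrasts)."""
    p1, p0 = y[alloc[:, j]].mean(), y[~alloc[:, j]].mean()
    pooled = y.mean()
    n1, n0 = alloc[:, j].sum(), (~alloc[:, j]).sum()
    se = np.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n0))
    return (p0 - p1) / se

alloc, y = distributive_trial()
print([f"{main_effect_z(alloc, y, j):+.2f}" for j in range(8)])
```

Because every participant serves as a "control" for the interventions they did not receive, each main-effect contrast uses the whole sample, which is the source of the sample-size savings reported in the abstract.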
37-random-analysis-trials: 2

Forced Randomisation – a powerful, sometimes controversial, tool for multi-centre RCTs

1Boehringer Ingelheim Pharma GmbH & Co. KG (Ingelheim, Germany); 2Boehringer Ingelheim Pharmaceuticals Inc. (Ridgefield, CT, USA); 3Merck & Co. Inc. (Rahway, NJ, USA); 4Amgen Ltd. (London, United Kingdom); 5Rensselaer Polytechnic Institute (Troy, NY, USA); 6Uppsala University (Uppsala, Sweden); 7Novartis Pharmaceuticals Corporation (East Hanover, NJ, USA)

During the enrolment period of a randomised controlled trial, drug supply at a site may run low, such that the site no longer has medication kits of all types available. If an eligible patient is to be randomised to a treatment for which no kits are currently available, two options are possible: either send that patient home, which might be deemed ethically questionable, or allocate the patient to a treatment arm with available kits at the site, using a built-in feature of the interactive response technology (IRT) system called forced randomisation (FR).

In the pharmaceutical industry, there is a general consensus that using FR might be acceptable, provided there are "not too many" instances of it. Furthermore, FR could be considered at odds with the ICH E9 guidance [1], which states that "[t]he next subject to be randomised into a trial should always receive the treatment corresponding to the next free number in the appropriate randomisation schedule (in the respective stratum, if randomisation is stratified)". Unfortunately, clear guidance on the circumstances under which FR is acceptable is currently lacking.

This talk will present recent work covering the potential benefits of forced randomisation under various forcing and supply strategies [2]. The impact on important characteristics of the clinical trial, such as the balance in sample size between treatment arms, the number of patients sent home, the duration of the trial, and the drug overage, will be discussed. In addition, potential ways to address forced randomisation in the statistical analysis will be assessed, and the impact of forced randomisation on the type I error rate of a trial will be investigated under several scenarios.
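To make the FR mechanism concrete, the toy simulation below forces patients to an arm with available kits whenever the scheduled arm is out of stock at a site. The supply levels and the forcing rule are illustrative assumptions for this sketch, not any specific IRT system's logic.

```python
# Toy simulation of forced randomisation (illustrative assumptions, not a
# real IRT system): when the kit for the scheduled arm is out of stock at
# a site, the patient is "forced" to an arm that still has kits instead of
# being sent home.
import random

random.seed(1)

def run_site(schedule, initial_kits):
    """schedule: planned arm per patient; initial_kits: {arm: kit count}."""
    kits = dict(initial_kits)
    assigned, forced, sent_home = [], 0, 0
    for planned in schedule:
        if kits[planned] > 0:
            arm = planned
        else:
            available = [a for a, n in kits.items() if n > 0]
            if not available:              # no kits at all: patient sent home
                sent_home += 1
                continue
            arm = random.choice(available)  # one possible forcing rule
            forced += 1
        kits[arm] -= 1
        assigned.append(arm)
    return assigned, forced, sent_home

# 1:1 permuted-block schedule, but arm 'B' is under-supplied at this site
schedule = [a for _ in range(5) for a in random.sample(['A', 'A', 'B', 'B'], 4)]
assigned, forced, sent_home = run_site(schedule, {'A': 12, 'B': 5})
print(f"A={assigned.count('A')}, B={assigned.count('B')}, "
      f"forced={forced}, sent home={sent_home}")
```

Tracking the forced count, the arm imbalance, and the number of patients sent home across simulated sites is one way to study the trade-offs between forcing and supply strategies that the talk discusses.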
37-random-analysis-trials: 3
Type I error rate control in adaptive platform trials when including non-concurrent controls in the presence of time trend

1University of Warwick, United Kingdom; 2University College London, United Kingdom

Background: Platform randomized controlled trials (RCTs) have gained popularity during and after the pandemic due to their efficiency and resource-saving capabilities. These trials allow the addition or removal of experimental treatments at any stage. Usually, only concurrent controls are used when an added experimental treatment arm is compared to the control arm. However, platform trials offer the opportunity to include non-concurrent controls, i.e., controls recruited before the added experimental arm entered the trial. Using non-concurrent controls increases power but requires accounting for time trends, because ignoring them introduces bias in the inference on treatment effects.

Methods: Lee and Wason (2020) proposed fitting a linear model with terms for the time trend, an approach later extended by Bofill Roig et al. (2022). This, however, does not account for the interim analyses associated with adaptive platform RCTs. We build on this model to show how to compute interim and final analysis boundaries for testing the effect of an added treatment that control the type I error rate. We first demonstrate that, using a linear model with additional parameters for treatments added as the RCT progresses, the test statistics for testing a treatment effect at different interim analyses have the joint canonical distribution assumed in group sequential methods (Jennison and Turnbull, 1997). We then identify boundaries that control the type I error rate for any values of the true effects of experimental arms introduced earlier in the trial.

Results: Our findings show that borrowing information from non-concurrent controls can improve power, though the gain depends on the possible decisions at interim analyses (futility stopping only, efficacy stopping only, or both) and on the effects of the experimental treatments that entered the trial earlier. The boundaries can also be optimized in terms of power.

Conclusion: This study establishes a method to control the type I error rate while optimizing power in platform RCTs with non-concurrent controls, accounting for both time trends and interim analyses.

References:
Bofill Roig et al. (2022). On model-based time trend adjustments in platform trials with non-concurrent controls. BMC Medical Research Methodology, 22(1), 228.
Jennison and Turnbull (1997). Group-sequential analysis incorporating covariate information. Journal of the American Statistical Association, 92(440), 1330-1341.
Lee and Wason (2020). Including non-concurrent control patients in the analysis of platform trials: is it worth it? BMC Medical Research Methodology, 20, 1-12.
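The time-trend adjustment described in the Methods can be sketched as a linear model with period fixed effects, as below. The data, effect sizes, and period coding are illustrative assumptions in the spirit of Lee and Wason (2020), not the authors' actual code.

```python
# Sketch of a time-trend-adjusted analysis with non-concurrent controls
# (illustrative data, not the authors' code): fit a linear model with
# treatment indicators plus a fixed effect for the calendar-time period,
# so controls recruited before arm 2 entered the trial can be borrowed
# without time-trend bias.
import numpy as np

rng = np.random.default_rng(7)

n_per_cell = 60
# Period 1: control + arm 1 recruit; Period 2: control + arm 1 + arm 2
rows = ([(0, 1)] * n_per_cell + [(1, 1)] * n_per_cell +
        [(0, 2)] * n_per_cell + [(1, 2)] * n_per_cell + [(2, 2)] * n_per_cell)
arm = np.array([r[0] for r in rows])
period = np.array([r[1] for r in rows])

theta = {0: 0.0, 1: 0.3, 2: 0.5}           # true treatment effects
trend = np.where(period == 2, 0.4, 0.0)    # time trend on the outcome
y = np.array([theta[a] for a in arm]) + trend + rng.normal(0, 1, len(arm))

# Design matrix: intercept, arm-1 and arm-2 indicators, period-2 indicator
X = np.column_stack([np.ones(len(y)), arm == 1, arm == 2, period == 2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"est arm 1 = {beta[1]:.2f}, est arm 2 = {beta[2]:.2f}, "
      f"est period effect = {beta[3]:.2f}")
```

Dropping the period column from the design matrix would confound the period-2 shift with the effect of arm 2, which is the bias the abstract warns about; the abstract's contribution is then to derive group-sequential boundaries for test statistics built from such models.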
37-random-analysis-trials: 4

Data-driven controlled subgroup selection in clinical trials

1Statistical Laboratory, University of Cambridge, Cambridge, United Kingdom; 2Advanced Methodology and Data Science, Novartis Pharma AG, Basel, Switzerland; 3Department of Mathematics and Statistics, Lancaster University, Lancaster, United Kingdom; 4School of Mathematical Sciences, University of Southampton, Southampton, United Kingdom; 5School of Mathematics, University of Bristol, Bristol, United Kingdom; 6School of Mathematics, University of Edinburgh, Edinburgh, United Kingdom
37-random-analysis-trials: 5

Debunking the myth: Random block sizes do not decrease selection biases in open-label clinical trials

Medical University of South Carolina, United States of America

Background: The permuted block design for subject randomization in clinical trials is highly susceptible to selection bias because of its predictable patterns, particularly when investigators are aware of the block size. This predictability can influence their enrollment decisions, introducing selection bias, especially in open-label trials. The random block design aims to mitigate this issue by using randomly varying block sizes, the expectation being that, without knowing the block size, investigators will be less able to predict treatment assignments. For this reason, it has been recommended by the ICH E9 Statistical Principles for Clinical Trials. However, closer examination reveals that 100% certainty is not a necessary condition for making treatment predictions; any prediction with a correct-guess probability above pure chance can result in selection bias.

Methods: Our recent research provides an analytical framework for assessing allocation predictability in infinite and finite allocation sequences. The convergence guessing strategy, proposed by Blackwell and Hodges in 1957, offers a more realistic measure of selection bias than prediction with certainty. Using the correct-guess probability as a metric, we evaluated the selection bias of the random block design and compared it with the permuted block design and alternative randomization designs, all under the same maximum tolerated imbalance restriction.

Results: Quantitative assessments indicate that, among all restricted randomization designs with the same maximum tolerated imbalance, the random block design carries the highest risk of selection bias. For instance, in a two-arm equal-allocation trial with a maximum tolerated imbalance of 3 (block size of 6), the average selection bias risk is 68.33% for the permuted block design and 70.28% for the random block design, whereas the big stick design and the block urn design yield 58.23% and 62.35%, respectively.

Conclusion: Replacing the permuted block design with the random block design increases the risk of selection bias, so the random block design should not be recommended for open-label trials. Instead, the big stick design and the block urn design offer superior protection against selection bias and are therefore recommended for open-label trials.
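The convergence guessing strategy cited in the Methods can be illustrated with a short simulation comparing a fixed-block-size permuted block design against a random block design. The block sizes here are arbitrary examples, and the abstract's exact percentages come from the authors' analytical framework rather than from this sketch.

```python
# Sketch of the Blackwell-Hodges (1957) convergence guessing strategy:
# before each allocation the investigator guesses the arm assigned less
# often so far (fair coin on ties).  Compares the correct-guess rate of a
# fixed-block-size permuted block design with a random block design
# (illustrative block sizes, not the authors' analytical results).
import random

random.seed(3)

def permuted_block(n, block_size):
    """Equal-allocation permuted block sequence of length n."""
    seq = []
    while len(seq) < n:
        block = ['A'] * (block_size // 2) + ['B'] * (block_size // 2)
        random.shuffle(block)
        seq += block
    return seq[:n]

def random_block(n, sizes=(2, 4, 6)):
    """Permuted blocks with randomly varying sizes drawn from `sizes`."""
    seq = []
    while len(seq) < n:
        b = random.choice(sizes)
        block = ['A'] * (b // 2) + ['B'] * (b // 2)
        random.shuffle(block)
        seq += block
    return seq[:n]

def correct_guess_rate(make_seq, n=120, reps=4000):
    """Average correct-guess probability under the convergence strategy."""
    hits = total = 0
    for _ in range(reps):
        seq = make_seq(n)
        count = {'A': 0, 'B': 0}
        for arm in seq:
            if count['A'] == count['B']:
                guess = random.choice('AB')          # tie: guess at random
            else:                                    # converge to lagging arm
                guess = 'A' if count['A'] < count['B'] else 'B'
            hits += guess == arm
            total += 1
            count[arm] += 1
    return hits / total

print(f"permuted blocks of 6: {correct_guess_rate(lambda n: permuted_block(n, 6)):.3f}")
print(f"random blocks 2/4/6:  {correct_guess_rate(random_block):.3f}")
```

The point the abstract makes is visible in this kind of comparison: the guesser never needs to know the block size, only the running imbalance, so varying the block size does not remove the excess correct-guess probability.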