Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

 
 
Session Overview

Session: Randomization and analysis of clinical trials
Time: Wednesday, 27/Aug/2025, 2:00pm - 3:30pm
Location: Biozentrum U1.131 (Biozentrum, 190 seats)

Presentations
37-random-analysis-trials: 1

Distributive randomization: a pragmatic design to evaluate multiple simultaneous interventions in a clinical trial

Skerdi Haviari1,2, France Mentré1,2

1Université Paris Cité, Inserm, IAME, 75018 Paris, France; 2Département Epidémiologie Biostatistiques Et Recherche Clinique, AP-HP, Hôpital Bichat, 75018 Paris, France

Background: In some medical indications, numerous interventions have a weak presumption of efficacy but a good track record or presumption of safety, which makes it feasible to evaluate them simultaneously. This study evaluates a new design that randomly allocates a pre-specified number of interventions to each participant and statistically tests the main effect of each intervention. We compare it to factorial trials, parallel-arm trials and multiple head-to-head trials, and derive some good practices for its design and analysis. We also extend the approach by varying the number of interventions from one patient to the next, so that an intervention program comprising several components, and each component individually, can be evaluated at the same time with proper contrasts.

Methods: We simulated various scenarios involving 4 to 20 candidate interventions/components, of which 2 to 8 could be allocated simultaneously. A binary outcome was assumed. One or two interventions were assumed effective, with various interactions (positive, negative, none). Efficient combinatorial algorithms were created. Sample sizes and power were obtained by simulation, with the statistical test being either a difference of proportions or a multivariate logistic regression Wald test, with or without interaction terms for adjustment.
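The allocation-and-test loop described above lends itself to a compact simulation. The sketch below is illustrative only and is not the authors' publicly available code: it assumes K = 10 candidate interventions with k = 4 allocated per participant, a purely additive effect of a single intervention on a binary outcome, and the difference-of-proportions test mentioned in the Methods.

```python
# Minimal sketch of a distributive-randomization simulation (illustrative only,
# not the authors' code). Each participant receives k of K candidate
# interventions chosen uniformly at random; the main effect of intervention 0
# is tested by a difference-of-proportions z-test (exposed vs. unexposed).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
K, k, n = 10, 4, 600            # candidate interventions, allocated per patient, sample size
p0, effect = 0.30, 0.12         # baseline event rate; additive effect of intervention 0 (assumed)

def simulate_trial():
    # allocation matrix: row i flags the k interventions given to participant i
    alloc = np.zeros((n, K), dtype=bool)
    for i in range(n):
        alloc[i, rng.choice(K, size=k, replace=False)] = True
    p = p0 + effect * alloc[:, 0]            # only intervention 0 is truly effective
    y = rng.random(n) < p
    exposed, unexposed = y[alloc[:, 0]], y[~alloc[:, 0]]
    diff = exposed.mean() - unexposed.mean()
    se = np.sqrt(exposed.mean() * (1 - exposed.mean()) / exposed.size
                 + unexposed.mean() * (1 - unexposed.mean()) / unexposed.size)
    return 2 * stats.norm.sf(abs(diff / se))  # two-sided p-value

power = np.mean([simulate_trial() < 0.05 for _ in range(2000)])
print(f"Estimated power for intervention 0: {power:.2f}")
```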

Results: Distributive trials reduce sample sizes 2- to 7-fold compared to parallel arm trials. An unexpectedly effective intervention causes small decreases in power (< 10%) if its effect is additive, but large decreases (possibly down to 0) if not, as for factorial designs. These large decreases are prevented by using interaction terms to adjust the analysis, but these additional estimands have a sample size cost and are better pre-specified. The issue can also be managed by adding a true control arm without any intervention, or by exploiting the variance-covariance matrix, which is all the more useful for the multi-component intervention use case.

Conclusion: Distributive randomization is a viable design for mass parallel evaluation of interventions in constrained trial populations. It should be introduced first in clinical settings where many undercharacterized interventions are potentially available, such as disease prevention strategies, digital behavioral interventions, dietary supplements for chronic conditions, or emerging diseases. Pre-trial simulations are recommended, using publicly available code.



37-random-analysis-trials: 2

Forced Randomisation – a powerful, sometimes controversial, tool for multi-centre RCTs

Johannes Krisam1, Kerstine Carter2, Olga Kuznetsova3, Volodymyr Anisimov4, Colin Scherer5, Yevgen Ryeznik6, Oleksandr Sverdlov7

1Boehringer Ingelheim Pharma GmbH & Co.KG (Ingelheim, Germany); 2Boehringer Ingelheim Pharmaceuticals Inc. (Ridgefield, CT, USA); 3Merck & Co. Inc. (Rahway, NJ, USA); 4Amgen Ltd. (London, United Kingdom); 5Rensselaer Polytechnic Institute (Troy, NY, USA); 6Uppsala University (Uppsala, Sweden); 7Novartis Pharmaceuticals Corporation (East Hanover, NJ, USA)

During the enrolment period of a randomised controlled trial, drug supply at a site may run low, such that the site no longer has medication kits of all types available. If an eligible patient is to be randomised to a treatment for which no kits are currently available, two options are possible: either send that patient home, which might be deemed ethically questionable, or allocate the patient to a treatment arm with kits available at the site, using a built-in feature of the interactive response technology (IRT) system called forced randomisation (FR). In the pharmaceutical industry, there is a general consensus that using FR might be acceptable, provided that there are “not too many” instances of FR. However, FR could be considered at odds with the ICH E9 guidance [1], which states that “[t]he next subject to be randomised into a trial should always receive the treatment corresponding to the next free number in the appropriate randomisation schedule (in the respective stratum, if randomisation is stratified)”. Unfortunately, clear guidance on the circumstances under which FR is acceptable is currently lacking.

This talk will present recent work on the potential benefits that can be garnered from the use of forced randomisation under various forcing and supply strategies [2]. The impact on important characteristics of the clinical trial, such as the balance in sample size between treatment arms, the number of patients sent home, the duration of the trial, and the drug overage, will be discussed. In addition, potential ways to address forced randomisation in the statistical analysis will be assessed, and the impact of forced randomisation on the type I error rate of a trial will be investigated under several scenarios.
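The core FR mechanism can be illustrated in a few lines. The sketch below is a deliberately simplified illustration (a single site, a complete-randomisation schedule, an arbitrary starting inventory); it does not represent any specific IRT system or the forcing and supply strategies compared in [2].

```python
# Minimal sketch of forced randomisation (FR) at a single site (illustrative
# assumptions throughout). When the kit for the scheduled assignment is out of
# stock, the patient is "forced" onto an arm that still has kits available.
import random

random.seed(7)
schedule = [random.choice("AB") for _ in range(40)]   # central randomisation list (simplified)
stock = {"A": 12, "B": 8}                             # site-level kit inventory (illustrative)

assigned, forced = [], 0
for planned in schedule:
    if stock[planned] > 0:
        arm = planned                                 # regular randomisation
    else:
        available = [a for a, n in stock.items() if n > 0]
        if not available:
            break                                     # no kits at all: patient cannot be dosed
        arm = random.choice(available)                # forced randomisation
        forced += 1
    stock[arm] -= 1
    assigned.append(arm)

print(f"{len(assigned)} patients randomised, {forced} forced, "
      f"arm counts: A={assigned.count('A')}, B={assigned.count('B')}")
```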

  1. ICH Harmonised Tripartite Guideline E9. Statistical Principles for Clinical Trials (1998). https://database.ich.org/sites/default/files/E9_Guideline.pdf
  2. Carter K, Kuznetsova O, Anisimov V, Krisam J, Scherer C, Ryeznik Y, Sverdlov O (2024). Forced randomization: the what, why, and how. BMC Med Res Methodol 24(1):234. doi: 10.1186/s12874-024-02340-0.


37-random-analysis-trials: 3

Type I error rate control in adaptive platform trials when including non-concurrent controls in the presence of time trend

Jinyu Zhu1, Peter Kimani1, Nigel Stallard1, Andy Metcalfe1, Jeremy Chataway2, Keith Abrams1

1University of Warwick, United Kingdom; 2University College London, United Kingdom

Background: Platform randomized controlled trials (RCTs) have gained popularity during and after the pandemic due to their efficiency and resource-saving capabilities. These trials allow the addition or removal of experimental treatments at any stage. Usually, only concurrent controls are used when an added experimental treatment arm is compared to the control arm. However, platform trials offer the opportunity to include non-concurrent controls, i.e. controls recruited before the added experimental arm entered the trial. Using non-concurrent controls increases power but requires accounting for time trends, because ignoring them introduces bias into inference on treatment effects.

Methods: Lee and Wason (2020) proposed fitting a linear model with terms for the time trend, an approach later extended by Bofill Roig et al. (2022). This, however, does not account for the interim analyses associated with adaptive platform RCTs. We build on this model to show how to compute interim and final analysis boundaries, for testing the effect of an added treatment, that control the type I error rate. We first demonstrate that, using a linear model with additional parameters for treatments added as the RCT progresses, the test statistics for a treatment effect at the different interim analyses have the joint canonical distribution assumed in group sequential methods (Jennison and Turnbull, 1997). We then identify boundaries that control the type I error rate for any values of the true effects of the experimental arms introduced earlier in the trial.
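For orientation, the sketch below fits the kind of time-trend-adjusted model referred to above (outcome regressed on treatment indicators plus calendar-period effects, so that non-concurrent controls can contribute without biasing the estimate for a later-entering arm) on simulated data. The data-generating values and two-period layout are assumptions for illustration; the group-sequential boundary computation developed in this work is not shown.

```python
# Minimal sketch of a time-trend-adjusted analysis (model form as in
# Lee & Wason 2020 / Bofill Roig et al. 2022: outcome ~ treatment indicators +
# calendar-period effects). All data are simulated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = []
for period, arms in [(1, ["ctrl", "trt1"]), (2, ["ctrl", "trt1", "trt2"])]:
    for arm in arms:
        n = 100
        mu = 0.5 * period                       # linear time trend (assumed)
        mu += {"ctrl": 0.0, "trt1": 0.3, "trt2": 0.4}[arm]
        rows.append(pd.DataFrame({
            "y": rng.normal(mu, 1.0, n), "arm": arm, "period": period}))
data = pd.concat(rows, ignore_index=True)

# Period entered as a factor adjusts for the time trend, so period-1 controls
# (non-concurrent for trt2) contribute without biasing the trt2 estimate.
fit = smf.ols("y ~ C(period) + C(arm, Treatment(reference='ctrl'))", data=data).fit()
print(fit.summary().tables[1])
```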

Results: Our findings show that borrowing information from non-concurrent controls can improve power, although the gain depends on the decisions allowed at interim analyses (futility stopping only, efficacy stopping only, or both) and on the effects of the experimental treatments that entered the trial earlier. The boundaries can also be optimised with respect to power.

Conclusion: This study establishes a method to control type I error while optimising power in platform RCTs with non-concurrent controls, accounting for both time trend and interim analyses.

References

Bofill Roig et al., 2022. On model-based time trend adjustments in platform trials with non-concurrent controls. BMC Medical Research Methodology, 22(1), 228.

Jennison and Turnbull, 1997. Group-sequential analysis incorporating covariate information. Journal of the American Statistical Association, 92(440), 1330-1341.

Lee and Wason, 2020. Including non-concurrent control patients in the analysis of platform trials: is it worth it? BMC Medical Research Methodology, 20, 1-12.



37-random-analysis-trials: 4

Data-driven controlled subgroup selection in clinical trials

Manuel M. Müller1, Konstantinos Sechidis2, Björn Bornkamp2, Frank Bretz2, Fang Wan3, Wei Liu4, Henry W. J. Reeve5, Timothy I. Cannings6, Richard J. Samworth1

1Statistical Laboratory, University of Cambridge, Cambridge, United Kingdom; 2Advanced Methodology and Data Science, Novartis Pharma AG, Basel, Switzerland; 3Department of Mathematics and Statistics, Lancaster University, Lancaster, United Kingdom; 4School of Mathematical Sciences, University of Southampton, Southampton, United Kingdom; 5School of Mathematics, University of Bristol, Bristol, United Kingdom; 6School of Mathematics, University of Edinburgh, Edinburgh, United Kingdom

Background and Introduction.
Subgroup selection in clinical trials is essential for identifying patient groups that may benefit differently from a treatment, thereby enhancing personalized medicine. It can also identify patient groups that experience adverse events after treatment. However, these post-selection inference problems pose challenges, such as inflated Type I error rates and potential biases from data-driven subgroup identification. In this paper, we present and extend two recently developed tools for subgroup selection in regression problems: one based on generalized linear modelling (GLM) (https://doi.org/10.1002/sim.9996) and another on nonparametric monotonicity constraints (https://doi.org/10.1093/jrsssb/qkae083). These methods alleviate the above reliability concerns, and we demonstrate how they can be extended and applied to address questions regarding treatment-effect heterogeneity.
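For orientation only, the sketch below shows the kind of model fit that a GLM-based subgroup analysis starts from: a logistic regression with a treatment-by-biomarker interaction on simulated data. It is not the controlled selection procedure of the cited papers, which adds post-selection Type I error control on top of such a fit; the data-generating mechanism and variable names are assumptions.

```python
# Minimal sketch of the GLM starting point for subgroup analysis: a logistic
# model with a treatment-by-biomarker interaction, from which candidate
# subgroups might be read off. NOT the controlled selection method of the
# cited papers; simulated data, assumed names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n = 1000
df = pd.DataFrame({
    "trt": rng.integers(0, 2, n),             # 1:1 randomised treatment indicator
    "x": rng.normal(size=n),                  # baseline biomarker
})
# Simulated heterogeneity: treatment benefit only for patients with x > 0
lin = -0.5 + df.trt * 0.8 * (df.x > 0) + 0.2 * df.x
df["y"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

fit = smf.logit("y ~ trt * x", data=df).fit(disp=0)
print(fit.params)   # a positive trt:x term suggests a larger effect for high-x patients
```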

Methods.
To evaluate these methods' effectiveness and reliability in clinical settings, we conduct a thorough simulation study in which the data distributions mimic those of two real data sets: one observational and one from a randomized controlled trial. Finally, we apply the methods to the two original data sets to address two distinct questions: identifying patient groups that experience adverse events and identifying patient groups that experience enhanced treatment effects, while controlling the Type I error rate in both cases.

Results.
We assess how well the methods retain Type I error rate control when their respective assumptions are violated. We find that the GLM-based method is less robust to such violations than the monotonicity-based approach, but has higher power when its underlying modelling assumptions hold. However, a more fine-grained analysis suggests that while the strict notion of Type I error control is violated more easily in the parametric setting, the picture is less clear for more nuanced measures of reliability. Furthermore, we illustrate the suitability of the examined methods when applied on top of the meta-learning framework popular for evaluating conditional average treatment effects.

Conclusions.
We conclude that recent methods for controlled subgroup selection exhibit a trade-off between reliability and power which parallels that between parametric and nonparametric methods elsewhere in statistics. Our study investigates the extent of these effects and should serve as useful guidance for applying controlled subgroup selection in other settings. Of particular interest is the difference between the measures of reliability we consider, the appropriate choice of which will depend strongly on the real-world application at hand.



37-random-analysis-trials: 5

Debunking the myth: Random block sizes do not decrease selection biases in open-label clinical trials

Wenle Zhao

Medical University of South Carolina, United States of America

Background:

The permuted block design for subject randomization in clinical trials is highly susceptible to selection bias because of its predictability, particularly when investigators are aware of the block size. This predictability can influence enrollment decisions and thereby introduce selection bias, especially in open-label trials. The random block design aims to mitigate this issue by using randomly varying block sizes, on the expectation that, without knowing the block size, investigators will be less able to predict treatment assignments. For this reason, it has been recommended by the ICH E9 Statistical Principles for Clinical Trials. However, closer examination reveals that 100% certainty is not a necessary condition for treatment prediction: any prediction with a correct-guess probability above that of pure chance can result in selection bias.

Methods:

Our recent research provides an analytical framework for assessing allocation predictability for infinite and finite allocation sequences. The convergence guessing strategy, proposed by Blackwell and Hodges in 1957, offers a more realistic measure of selection bias than prediction with certainty. Using the correct-guess probability as the metric, we evaluated the selection bias of the random block design and compared it with the permuted block design and alternative randomization designs, all under the same maximum tolerated imbalance restriction.
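A minimal sketch of this metric for the permuted block design is given below (illustrative only, not the authors' analytical framework): it simulates permuted blocks of size 6 and applies the Blackwell-Hodges convergence strategy, guessing the under-represented arm and guessing at random on ties. The resulting correct-guess probability should come out close to the 68.33% quoted for the permuted block design in the Results; the other designs compared there are not implemented here.

```python
# Minimal sketch estimating the correct-guess probability under the
# Blackwell-Hodges convergence guessing strategy for a permuted block design
# with block size 6 (illustrative simulation only).
import numpy as np

rng = np.random.default_rng(5)

def permuted_block_sequence(n_blocks, block_size=6):
    # each block contains an equal number of assignments to arms 0 and 1
    block = np.array([0] * (block_size // 2) + [1] * (block_size // 2))
    return np.concatenate([rng.permutation(block) for _ in range(n_blocks)])

def correct_guess_rate(seq):
    counts, correct = [0, 0], 0
    for a in seq:
        # convergence strategy: guess the under-represented arm, random on ties
        guess = int(counts[0] > counts[1]) if counts[0] != counts[1] else rng.integers(2)
        correct += (guess == a)
        counts[a] += 1
    return correct / len(seq)

rates = [correct_guess_rate(permuted_block_sequence(10)) for _ in range(5000)]
print(f"Mean correct-guess probability (block size 6): {np.mean(rates):.3f}")
```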

Results:

Quantitative assessments indicate that, among all restricted randomization designs with the same maximum tolerated imbalance restriction, the random block design exhibits the highest risk of selection bias. For instance, in a two-arm equal-allocation trial with a maximum tolerated imbalance of 3 (block size of 6), the average selection bias risk is 68.33% for the permuted block design and 70.28% for the random block design, while the big stick design and the block urn design yield 58.23% and 62.35%, respectively.

Conclusion:

Replacing the permuted block design with the random block design increases the risk of selection bias and should not be recommended for open-label trials. Instead, the big stick design and the block urn design offer superior protection against selection bias and are therefore recommended for open-label trials.

References:

  1. Zhao W, Carter K, Sverdlov O, et al. Steady-state statistical properties and implementation of randomization designs with maximum tolerated imbalance restriction for two-arm equal allocation clinical trials. Stat Med. 2024;43(6):1194-1212.
  2. Zhao W, Livingston S. Allocation Predictability of Individual Assignments in Restricted Randomization Designs for Two-Arm Equal Allocation Trials. Stat Med. 2025;44(3-4):e10343.


 