24-rare-disease-trials: 1
Blinded sample size re-estimation accounting for estimation error with small internal pilot studies
Hirotada Maeda1, Satoshi Hattori1, Tim Friede2
1Graduate School of Medicine Division of Medicine, Osaka University; 2Department of Medical Statistics, University Medical Center Göttingen
In randomized controlled trials, it is important to set the target sample size accurately at the design stage for the study to be conclusive. However, sample size formulae typically require the specification of parameters other than the treatment effect, often referred to as nuisance parameters, and their misspecification can lead to underpowered studies. For example, when comparing two normal populations, the specified standard deviation strongly influences the power of the final analysis. Blinded sample size re-estimation is an approach to mitigate inaccurate sample size calculation. Kieser and Friede (2003) proposed using the one-sample variance, which can be estimated in a blind review without knowledge of the treatment allocation. We point out that their method can be regarded as a worst-case evaluation, based on the largest variance compatible with the blinded data, and is therefore likely to avoid underpowered studies. However, as they reported, when the blind review is based on a small sample it may still lead to underpowered studies. We propose a refined method that accounts for the estimation error in the blind review by using confidence intervals for the one-sample variance, together with a procedure for selecting an appropriate confidence level so that the re-estimated sample size attains the target power. The idea is related to the sample size calculation method with pilot studies by Kieser and Wassmer (1996). The required confidence level can be prespecified in the protocol; coupled with the blinded one-sample variance estimate, it determines the sample size for the final analysis with the target power to detect the pre-specified treatment effect while maintaining study integrity. We conducted numerical studies to evaluate the performance of the proposed method and concluded that it works as designed and outperforms existing methods.
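A minimal sketch of the re-estimation step is given below, assuming a standard two-sample z-approximation; the chi-square-based upper confidence limit and the function and argument names are illustrative assumptions rather than the authors' exact procedure.

```python
# Illustrative sketch (not the authors' exact procedure): blinded sample size
# re-estimation for a two-arm comparison of normal means, z-approximation.
import math
from scipy.stats import norm, chi2

def reestimated_n_per_arm(s2_blinded, n_pilot, delta,
                          alpha=0.05, power=0.80, conf_level=None):
    """Return the re-estimated per-arm sample size.

    s2_blinded -- blinded one-sample variance from the internal pilot
    n_pilot    -- total pilot sample size used to estimate s2_blinded
    delta      -- pre-specified treatment effect (difference in means)
    conf_level -- if given, replace the variance estimate by its one-sided
                  upper confidence limit at this level (chi-square based),
                  a hypothetical stand-in for accounting for estimation error
    """
    sigma2 = s2_blinded
    if conf_level is not None:
        # one-sided upper confidence limit for the variance
        sigma2 = (n_pilot - 1) * s2_blinded / chi2.ppf(1 - conf_level, n_pilot - 1)
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * z ** 2 * sigma2 / delta ** 2)

# Example: blind review of 20 patients, blinded variance 4.0, target effect 1.5;
# compare the plain update with the confidence-limit-based update.
print(reestimated_n_per_arm(4.0, 20, 1.5))                  # point estimate
print(reestimated_n_per_arm(4.0, 20, 1.5, conf_level=0.8))  # inflated for estimation error
```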
24-rare-disease-trials: 2
Randomization in clinical trials with small sample sizes using group sequential designs
Daniel Bodden1, Ralf-Dieter Hilgers1, Franz König2
1Institute of Medical Statistics, RWTH Aachen University, Aachen; 2Institute of Medical Statistics, Center for Medical Data Science, Medical University of Vienna
Background
Group sequential designs, which allow early stopping for efficacy or futility, may benefit from balanced sample sizes at interim and final analyses. This requirement for balance limits the choice of admissible randomization procedures. We investigate whether the choice of randomization procedure, balanced or not, affects the type I error probability and power in trials with group sequential designs.
Methods
We assess the impact of randomization procedures on the type I error probability and power of trials using Pocock, O'Brien-Fleming, Lan-DeMets, and inverse normal combination test designs.
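As a rough illustration of the kind of simulation involved, the sketch below evaluates the type I error of a two-stage inverse normal combination test under the null hypothesis when each stage is randomized with permuted blocks; the stage sizes, block size, and absence of early stopping are simplifying assumptions, not the settings of the reported study.

```python
# Illustrative sketch (assumed settings): empirical type I error of a two-stage
# inverse normal combination test with permuted-block randomization, outcomes
# standard normal under the null hypothesis.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def permuted_block_allocation(n, block_size=4):
    """0/1 treatment labels from permuted blocks (1:1 target ratio)."""
    blocks = []
    while sum(len(b) for b in blocks) < n:
        block = np.array([0, 1] * (block_size // 2))
        rng.shuffle(block)
        blocks.append(block)
    return np.concatenate(blocks)[:n]

def stage_z(y, g):
    """Two-sample z-statistic with known unit variance."""
    n1, n0 = (g == 1).sum(), (g == 0).sum()
    return (y[g == 1].mean() - y[g == 0].mean()) / np.sqrt(1 / n1 + 1 / n0)

n_stage, n_sim, alpha = 12, 20000, 0.05
w1 = w2 = 1 / np.sqrt(2)          # prespecified equal stage weights
crit = norm.ppf(1 - alpha / 2)    # no early stopping in this sketch
rejections = 0
for _ in range(n_sim):
    z_stages = []
    for _ in range(2):            # two stages, each randomized separately
        g = permuted_block_allocation(n_stage)
        y = rng.standard_normal(n_stage)
        z_stages.append(stage_z(y, g))
    z_comb = w1 * z_stages[0] + w2 * z_stages[1]
    rejections += abs(z_comb) > crit
print("empirical type I error:", rejections / n_sim)
```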
Results
Simulation results demonstrate that deficiencies in the implementation of randomization can inflate type I error rates. Some combinations of group sequential design and randomization procedure cause a loss of power, for example when inverse normal combination tests are used.
Conclusion
We propose a framework for selecting the most suitable combinations of group sequential design and randomization procedure. When the planned balanced allocation ratio at (interim) analyses cannot be ensured, the Lan-DeMets approach is preferable for small sample trials due to its robustness to deviations between the planned and observed allocation ratios. The inverse normal combination test, while useful in trials with limited prior information, should be used cautiously and combined with a randomization procedure, such as permuted block randomization, that maintains the planned allocation ratio, in order to avoid power loss.
24-rare-disease-trials: 3
Adjusting for allocation bias in stratified clinical trials with multi-component endpoints
Stefanie Schoenen1, Ralf-Dieter Hilgers1, Nicole Heussen2
1RWTH Aachen, Germany; 2Sigmund Freud Private University, Vienna, Austria
Background
Disease heterogeneity and the geographic dispersion of patients with rare diseases often necessitate a multi-centre design and the use of multi-component endpoints, which combine multiple outcome measures into a single score. A common issue in these trials is allocation bias, as trials in rare diseases are frequently unblinded or single-blinded. Allocation bias occurs when future allocations can be predicted from previous ones, potentially leading to the preferential assignment of patients with specific characteristics to either the treatment or the control group. The ICH E9 guideline recommends assessing the potential contribution of bias to inference. Therefore, our research aims to develop a bias-adjusted analysis strategy for stratified clinical trials with multi-component endpoints.
Methods
To model biased patient responses, we derived an allocation biasing policy based on the convergence strategy of Blackwell and Hodges [1], which assumes that the next patient will be allocated to the group with fewer prior assignments. Using this policy, we formulated a bias-adjusted analysis strategy for a stratified version of the Wei-Lachin test, which is a combination of Fleiss's stratified test and the Wei-Lachin test [2,3].
Through simulations, we assess the impact of allocation bias on the type I error rate of the stratified Wei-Lachin test, both with and without bias adjustment, and evaluate how statistical power is affected when accounting for allocation bias.
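A minimal sketch of such a biasing policy, under simplifying assumptions (a single stratum, normally distributed responses, and a fixed selection effect eta), is shown below; it illustrates the convergence strategy rather than the exact policy derived in this work.

```python
# Illustrative sketch (single stratum, normal responses): biased responses
# under the Blackwell-Hodges convergence strategy. The shift eta is an assumed
# selection effect; it is added when the investigator expects the next
# allocation to go to the experimental arm and subtracted otherwise.
import numpy as np

rng = np.random.default_rng(7)

def biased_responses(allocation, eta=0.5, mu=0.0, sd=1.0):
    """allocation: sequence of 0/1 treatment labels in order of enrolment."""
    n_treat = n_ctrl = 0
    y = []
    for g in allocation:
        if n_treat < n_ctrl:        # convergence guess: experimental arm next
            shift = +eta
        elif n_treat > n_ctrl:      # convergence guess: control arm next
            shift = -eta
        else:
            shift = 0.0             # no prediction possible when balanced
        y.append(rng.normal(mu + shift, sd))
        n_treat += (g == 1)
        n_ctrl += (g == 0)
    return np.array(y)

# Example: a permuted-block sequence of 16 patients under the null hypothesis;
# the responses carry the selection effect even without a true treatment
# difference, which is what inflates the unadjusted type I error rate.
block = np.array([0, 1, 0, 1])
alloc = np.concatenate([rng.permutation(block) for _ in range(4)])
print(biased_responses(alloc, eta=0.5))
```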
Results
Allocation bias increases the type I error rate of the stratified Wei-Lachin test, potentially exceeding the 5% significance level. Therefore, if allocation bias is a concern, a bias-adjusted analysis should be conducted as a sensitivity analysis to ensure valid results. The bias-adjusted stratified Wei-Lachin test maintains the 5% significance level while preserving approximately 80% power under both unbiased and biased conditions. In contrast, the unadjusted test shows an inflated power exceeding 80% in the presence of bias, leading to an overestimation of the true treatment effect.
Conclusion
Conducting a bias-adjusted test as sensitivity analysis improves the validity of trial results. The proposed methodology enhances the robustness of rare disease clinical trials, ensuring more reliable and accurate conclusions.
[1] Blackwell D, Hodges JL. Design for the Control of Selection Bias. The Annals of Mathematical Statistics. 1957;28(2):449–460.
[2] Fleiss JL. Analysis of data from multiclinic trials. Controlled Clinical Trials. 1986;7(4):267–275.
[3] Wei LJ, Lachin JM. Two-Sample Asymptotically Distribution-Free Tests for Incomplete Multivariate Observations. Journal of the American Statistical Association. 1984;79(387):653–661.
24-rare-disease-trials: 4
Methodological insights from the EPISTOP trial for designing and analysing clinical trials in rare diseases
Stephanie Wied, Ralf-Dieter Hilgers
RWTH Aachen University, Germany
Background
The most suitable method for assessing the impact of an intervention in clinical research is a randomised controlled trial (RCT). However, implementing an RCT can be challenging, especially in small population groups. These challenges can arise during the planning phase of a clinical trial or emerge later, when potential solutions may no longer be feasible. The EPISTOP trial aimed to compare outcomes in infants with tuberous sclerosis complex (TSC) who received vigabatrin preventively, before the onset of seizures, with outcomes in those who were treated conventionally after seizure onset [1]. The study was designed as a prospective, multicentre, randomised clinical trial. However, ethics committees at four centres did not approve this RCT design, resulting in an open-label trial (OLT) at these four centres.
Methods
We investigate whether randomisation introduced any bias in the EPISTOP trial and how to address the presence of different types of data (RCT and OLT data) within the context of clinical trials. To support and strengthen the published results, we re-analyse the data from the EPISTOP trial using a bias-corrected analysis [2]. The statistical model includes a term representing the effect of selection bias as a factor influencing the corresponding endpoint. As a result, the treatment effect estimates for the primary endpoint, time to first seizure, as well as for the secondary endpoints, are adjusted for the impact of bias.
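As an illustration of carrying a bias term as a covariate in a time-to-event model, the sketch below fits a Cox model for time to first seizure with a treatment indicator and a hypothetical bias score using the lifelines package; the simulated data and the construction of the bias covariate are invented stand-ins, not the bias-corrected analysis of [2].

```python
# Illustrative sketch (simulated data, hypothetical bias covariate): a Cox
# model for time to first seizure with a term representing selection bias,
# so that the treatment hazard ratio is adjusted for its influence.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 100
treatment = rng.integers(0, 2, n)
bias = rng.choice([-0.5, 0.0, 0.5], n)           # stand-in selection-bias score
# exponential time to first seizure; treatment lowers the hazard, bias shifts it
hazard = 0.1 * np.exp(-0.7 * treatment + 0.4 * bias)
time = rng.exponential(1 / hazard)
censor = rng.exponential(15, n)                  # administrative censoring
df = pd.DataFrame({
    "time": np.minimum(time, censor),
    "seizure": (time <= censor).astype(int),
    "treatment": treatment,
    "bias": bias,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="seizure")  # covariates: treatment, bias
cph.print_summary()   # hazard ratio for treatment, adjusted for the bias term
```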
Results
The bias-corrected analysis of the primary endpoint yields an estimated hazard ratio and confidence interval very similar to those of the original analysis (original: HR 2.91, 95% CI [1.11 to 7.67], p-value 0.0306; bias-corrected: HR 2.89, 95% CI [1.10 to 7.58], p-value 0.0316). This consistency was also observed for the secondary endpoints. Therefore, the statistical reanalysis of the raw study data supports the published results and does not indicate additional bias related to randomisation.
Conclusion
In summary, the prevention and quantification of bias should be taken into account in future clinical trials to ensure reliable study results.
References
[1] Kotulska, K. et al. (2020), Prevention of Epilepsy in Infants with Tuberous Sclerosis Complex in the EPISTOP Trial. Ann Neurol, 89: 304-314. https://doi.org/10.1002/ana.25956
[2] Wied, S. et al. (2024) Methodological insights from the EPISTOP trial to designing clinical trials in rare diseases - A secondary analysis of a randomized clinical trial. PLOS ONE 19(12). https://doi.org/10.1371/journal.pone.0312936
24-rare-disease-trials: 5
Modified crossover trials to improve feasibility of evaluating multiple treatments for rare relapsing-remitting conditions
James Wason
Newcastle University, United Kingdom
Background: It is challenging to conduct well-powered clinical trials for rare diseases because of the limited number of patients available for recruitment. Trial designs that are statistically efficient and appealing to potential participants can substantially improve the feasibility of conducting such trials.
For chronic relapsing-remitting conditions, crossover trials are well established for treating participants with multiple interventions in sequence. They are highly statistically efficient; however, they may be off-putting to participants because they involve stopping a treatment at a specified point, even if it is providing benefit.
This presentation discusses a modified crossover trial design, developed for the BIOVAS trial. BIOVAS assessed the effect of three biologic therapies versus placebo in patients with non-ANCA-associated vasculitis. The design used a time-to-event outcome representing the occurrence of a disease flare (recurrence of symptoms), with participants moving on to the next treatment in the sequence after a flare occurs. In this way, participants remain on a treatment whilst they are benefitting.
Methods:
Using a simulation study, the statistical properties of the trial design are shown, assuming the trial is analysed with a mixed-effects Cox regression model. Considerations on how blinding can be implemented are provided. The development of two newer trials, in VEXAS syndrome and juvenile scleroderma, that use a similar design will be highlighted.
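A minimal sketch of the period-level data generated by this design is given below, under assumed exponential flare times and a lognormal participant frailty; the parameter values are illustrative, and the resulting one-row-per-period data set would then be analysed with a mixed-effects Cox regression (participant as a random effect) in suitable survival software.

```python
# Illustrative sketch (assumed parameters): simulate the period-level data of
# the modified crossover design, where each participant stays on a treatment
# until a flare occurs and then moves to the next treatment in the sequence.
import numpy as np
import pandas as pd

rng = np.random.default_rng(11)
treatments = ["placebo", "biologic_A", "biologic_B", "biologic_C"]
log_hr = {"placebo": 0.0, "biologic_A": -0.4, "biologic_B": -0.3, "biologic_C": -0.2}
base_hazard, followup, n_participants = 0.08, 52.0, 40   # weekly scale, one year

rows = []
for pid in range(n_participants):
    frailty = rng.lognormal(mean=0.0, sigma=0.5)      # participant random effect
    sequence = rng.permutation(treatments)            # randomized treatment order
    elapsed = 0.0
    for period, trt in enumerate(sequence, start=1):
        hazard = base_hazard * frailty * np.exp(log_hr[trt])
        flare_time = rng.exponential(1 / hazard)
        time = min(flare_time, followup - elapsed)    # censor at end of follow-up
        event = int(flare_time <= followup - elapsed)
        rows.append({"id": pid, "period": period, "treatment": trt,
                     "time": time, "flare": event})
        elapsed += time
        if not event:                                 # no flare: stays on treatment
            break

periods = pd.DataFrame(rows)
print(periods.head())   # one row per participant-period, to be analysed with a
                        # mixed-effects Cox model (participant as random effect)
```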
Results:
Simulation studies showed no evidence of type I error rate inflation or non-negligible statistical bias in realistic situations. Careful consideration of blinding is necessary to ensure participants do not become unblinded during the sequence.
Conclusion:
This modified crossover design improves patient acceptability by allowing continued benefit from treatment while maintaining high statistical efficiency.