01-efficient-analysis-trials: 1
The Power of Three. A Framework to Guide Analysis of Covariance in Randomised Clinical Trials.
Stephen John Senn1,2,3, Franz Koenig2, Martin Posch2
1University of Sheffield, United Kingdom; 2Medical University of Vienna; 3University of St Andrews
It is generally recognised that incorporating prognostic factors in the model used to analyse a clinical trial can improve the efficiency of estimates. Exactly what the improvement might be has generally been investigated either asymptotically or by simulation. Here we show that by assuming that covariates are Normally distributed, it is possible to obtain exact theoretical results for any sample size for the three factors that govern efficiency, namely 1) the expected mean square error, 2) the variance inflation factor (VIF) and 3) second-order precision, that is to say, the precision of the estimate of the mean square error.
Fairly obviously, the main influence on the expected mean square error is the partial correlation with outcome of any prognostic factor to be added to the model. Equally obviously, second-order precision is simply determined by the residual degrees of freedom. However, what influences the VIF is less obvious, and we show that it is equal to 1 - R_Z^2, where R_Z^2 is the coefficient of determination for the treatment indicator, Z, using the covariates as predictors. We give an exact expression for the expected value of the VIF that depends simply on the sample size and the number of predictors. This should greatly assist trialists in planning their analyses.
We also give reasons for believing that these formulae will work well even for gross violations of Normality by the covariates (for example, categorical covariates), and show a relationship between the VIF and the treatment-by-category chi-square statistic for any categorical covariate. We suggest that this theoretical framework can 1) provide a means of guiding and interpreting simulation studies and 2) shed light on many practical matters, for example whether it is worth adding a covariate to a model, what the value might be of using so-called super-covariates, and whether stratifying continuous covariates is sensible.
These results should be of particular interest for trialists working in rare diseases, where patient numbers in trials are low and relying on asymptotic results can be misleading.
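The dependence of R_Z^2 on the sample size and the number of predictors can be checked numerically. The sketch below is illustrative only (it is not the authors' derivation): assuming 1:1 randomisation and independent standard Normal covariates, the expected coefficient of determination for a response that is independent of k Normal predictors is the known value k/(n-1), which the simulation recovers.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 40, 3  # hypothetical sample size and number of covariates

def r2_treatment(rng):
    # Randomised 1:1 treatment indicator, independent of the covariates
    z = rng.permutation(np.r_[np.ones(n // 2), np.zeros(n // 2)])
    X = np.column_stack([np.ones(n), rng.standard_normal((n, k))])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    # Coefficient of determination of the treatment indicator on the covariates
    return 1 - (resid @ resid) / np.sum((z - z.mean()) ** 2)

r2s = [r2_treatment(rng) for _ in range(5000)]
print(sum(r2s) / len(r2s), k / (n - 1))  # Monte Carlo mean vs theoretical value
```

With n = 40 and k = 3 the two printed values should agree to roughly two decimal places, illustrating how quickly covariates erode residual degrees of freedom relative to sample size.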
01-efficient-analysis-trials: 2
Stirring the pot: combining influence functions and Wald type tests for more powerful closed testing procedures
Christian Bressen Pipper, Klaus Holst
Novo Nordisk A/S, Denmark
We use influence functions of estimators to derive the large sample properties of a Wald type test for the intersection of two superiority hypotheses. This is done via the so-called stacking approach without making any assumptions on the simultaneous behavior of estimators. The resulting test is shown to have good power properties and thus forms the basis of a powerful closed testing procedure for testing two superiority hypotheses. We compare the proposal to the Bonferroni-Holm procedure and identify a number of scenarios in which superior performance is ensured. The actual power gain is investigated through simulations. Finally, we present a software implementation available through the R package targeted and discuss how the methods can be extended to more than two superiority hypotheses.
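As a rough illustration of the stacking idea (this is a toy sketch, not the implementation in the targeted package), the code below estimates two difference-in-means treatment effects, stacks their per-subject influence functions to estimate the joint covariance without any assumption on the simultaneous behavior of the estimators, and forms a Wald statistic for the intersection hypothesis. All data, effect sizes and the 1:1 randomisation are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
z = rng.integers(0, 2, n)  # randomised treatment indicator
# Two correlated endpoints with a hypothetical effect of 0.5 on each
eps = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], n)
y = 0.5 * z[:, None] + eps  # columns: endpoint 1, endpoint 2

p = z.mean()
theta = y[z == 1].mean(axis=0) - y[z == 0].mean(axis=0)

# Stacked influence functions of the two difference-in-means estimators
phi = (z[:, None] / p) * (y - y[z == 1].mean(axis=0)) \
    - ((1 - z)[:, None] / (1 - p)) * (y - y[z == 0].mean(axis=0))
V = phi.T @ phi / n**2  # estimated joint covariance of theta-hat
W = theta @ np.linalg.solve(V, theta)  # Wald statistic, ~ chi2(2) under the joint null
print(W)
```

The off-diagonal of V is estimated directly from the stacked influence functions, which is what removes the need for a priori assumptions about the dependence of the two estimators.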
01-efficient-analysis-trials: 3
Analyzing multi-center randomized trials with covariate adjustment while accounting for clustering
Muluneh Alene Addis, Kelly Van Lancker, Stijn Vansteelandt
Ghent University, Belgium
Augmented inverse probability weighting (AIPW) and G-computation with canonical generalized linear models have become increasingly popular for estimating the average treatment effect (ATE) in randomized experiments. These estimators leverage outcome prediction models to adjust for imbalances in baseline covariates across treatment arms, improving statistical power compared to unadjusted analyses, while maintaining control over Type I error rates, even when the models are misspecified. Practical application of such estimators often overlooks the clustering present in multi-center clinical trials. Even when prediction models account for center effects, this omission can degrade the coverage of confidence intervals, reduce the efficiency of the estimators, and complicate the interpretation of the corresponding estimands. These issues are particularly pronounced for estimators of counterfactual means, though somewhat less severe for those of the ATE, as demonstrated through Monte Carlo simulations and supported by theoretical insights. To address these challenges, we develop efficient estimators of counterfactual means and of the ATE in a random center. These extract information from baseline covariates by relying on outcome prediction models, but remain unbiased in large samples when these models are misspecified. We also introduce an accompanying inference framework inspired by random-effects meta-analysis. Adjusting for center effects yields substantial gains in efficiency, especially when treatment effect heterogeneity across centers is large. Monte Carlo simulations and application to the WASH Benefits Bangladesh study demonstrate adequate performance of the proposed methods.
01-efficient-analysis-trials: 4
Increasing efficiency of composite endpoint trials: Novel Bayesian latent variable framework with application to late-stage trials
Paul Newcombe1, Jasna Cotic1, Aris Perperoglou1, James Wason2, Dave Lunn1
1GSK, United Kingdom; 2Newcastle University, United Kingdom
Composite responder endpoints, which combine multiple clinical outcomes to determine a binary responder variable, are commonly used in clinical trials to capture various aspects of disease progression. Traditionally, these endpoints are analysed as binary, which means that a large amount of information is discarded as the continuous component variables are dichotomised and collapsed together. Various methods, including a latent variable framework proposed by McMenamin et al[1], enable more efficient analysis of composite endpoints through an expanded model that includes the underlying continuous endpoint information to improve precision, while inferring treatment effects on the same composite endpoint scale. Previous applications to academic trials, and post-hoc analysis of pharmaceutical trial data, have indicated that reductions in sample size of up to 60% can be possible[1].
Despite clear potential to enable smaller, shorter trials, thereby decreasing costs and delivering new medicines to patients faster, there are no examples to our knowledge of this methodology being put forward to regulators, or being used to design a clinical trial within the pharmaceutical industry. Implementing a Bayesian approach could increase uptake by enabling integration into quantitative decision-making frameworks such as conditional assurance[2], which are increasingly used during earlier phases of drug development. We will describe a novel Bayesian implementation of the composite responder latent variable framework, and demonstrate that it enables power increases of as much as 50% in realistic simulations based on GSK trial data. Results illustrating application of both Bayesian and Frequentist implementations to secondary analysis of several GSK phase 3 trials will also be presented; this represents the first application of the latent variable framework to large, late-stage trials and indicates that a substantial sample size saving could have been possible. We hope to raise awareness of this important technique with statisticians working in all stages of drug development, and prompt further methodological development.
[1] McMenamin M, Barrett JK, Berglind A, Wason JM. Employing a latent variable framework to improve efficiency in composite endpoint analysis. Stat Methods Med Res. 2021;30(3):702-716.
[2] Temple JR, Robertson JR. Conditional assurance: the answer to the questions that should be asked within drug development. Pharm Stat. 2021;30(6):1102-1111.
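The efficiency argument can be illustrated with a toy example (hypothetical responder threshold, effect size and known unit variance; a drastic simplification of the latent variable framework of [1], not the Bayesian implementation described above): the treatment effect is estimated on the continuous scale and then mapped back to a difference in responder probabilities, instead of discarding the continuous information through dichotomisation.

```python
import math, random

random.seed(3)
n = 100
tau = 0.0    # hypothetical responder threshold on the continuous scale
delta = 0.4  # hypothetical treatment effect on the continuous endpoint

ctrl = [random.gauss(0, 1) for _ in range(n)]
trt = [random.gauss(delta, 1) for _ in range(n)]

def Phi(x):  # standard normal CDF
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Binary analysis: difference in observed responder proportions
p_bin = sum(y > tau for y in trt) / n - sum(y > tau for y in ctrl) / n

# Latent-variable analysis: model the continuous endpoint, then map the
# fitted arm means back to responder probabilities on the composite scale
m1 = sum(trt) / n
m0 = sum(ctrl) / n
p_lat = Phi(m1 - tau) - Phi(m0 - tau)
print(p_bin, p_lat)
```

Both quantities estimate the same responder-scale treatment effect, but the latent-variable version inherits the smaller sampling variability of the continuous means, which is the source of the sample size savings reported above.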
01-efficient-analysis-trials: 5
Ordinal Outcome Analysis in Neurological Trials: Current Practices and the Proportional Odds Debate
Yongxi Long1, Bart Jacobs2, Ewout Steyerberg3, Erik van Zwet1
1Leiden University Medical Center, The Netherlands; 2Erasmus Medical Center, The Netherlands; 3University Medical Center Utrecht, The Netherlands
Ordinal scales, such as the modified Rankin Scale and Glasgow Outcome Scale Extended, are widely used as outcome measures in neurological trials. In a literature review of 70 recent randomized controlled trials (RCTs) across five acute neurological conditions, we examined statistical methods used to test and estimate treatment effects from ordinal outcomes.
Dichotomization remained common, occurring in about one-third of the RCTs, with notable discrepancies in the cut-points chosen for analysis. The information contained in the rank ordering of the outcome was therefore not fully used. Among studies that retained the ordinal nature of the data, the proportional odds model was commonly used to quantify the treatment effect in terms of a common odds ratio and to test the null hypothesis that the treatment has no effect. However, there was large variation in the assessment and reporting of the proportional odds assumption. This lack of clarity can lead to misinterpretation of results and suboptimal methodological choices. Concern over the validity of this assumption may lead researchers to dichotomize ordinal outcomes unnecessarily.
To address these challenges, we developed methodological guidance for ordinal outcome analysis, with a particular focus on the proportional odds assumption. We explain why the proportional odds assumption is irrelevant for hypothesis testing but crucial for summarizing treatment effects. We also demonstrate that pre-testing the proportional odds assumption can lead to inflated type I error rates. We advocate for a simple graphical check of the assumption as more informative than formal testing. If we are satisfied that there is no substantial violation of the proportional odds assumption, it is reasonable to summarize the treatment effect into a single number such as the common odds ratio.
We illustrate these considerations with three neurological trials: the ANGEL-ASPECT and MR CLEAN trials (which investigated endovascular therapy in stroke) and the RESCUEicp trial (which investigated decompressive craniectomy in traumatic brain injury). We conclude with a statistical checklist for ordinal outcome analysis. By addressing common misconceptions and providing practical recommendations, our work aims to promote more rigorous and interpretable statistical practice in neurological trials.
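A graphical check of proportional odds amounts to computing the cumulative log odds ratio at each cut-point of the ordinal scale and judging whether it is roughly constant across cut-points. A minimal sketch with invented category counts (not data from any of the trials above):

```python
import math

# Hypothetical counts across six ordinal outcome categories, per arm
control = [30, 25, 20, 15, 7, 3]
treated = [40, 27, 15, 10, 5, 3]

def cum_logodds(counts):
    # Log odds of being at or below each cut-point (all but the last category)
    n = sum(counts)
    out, c = [], 0
    for k in counts[:-1]:
        c += k
        p = c / n
        out.append(math.log(p / (1 - p)))
    return out

# Per-cut-point log odds ratios; under proportional odds these should be
# roughly constant, and their rough average is the common log odds ratio
lors = [t - c for t, c in zip(cum_logodds(treated), cum_logodds(control))]
print([round(v, 2) for v in lors])
```

Plotting these five values (one per cut-point) with confidence intervals is the kind of informal check that can replace a formal pre-test of the assumption.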