Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session: Improving replicability in clinical biostatistics
Time: Wednesday, 27/Aug/2025, 2:00pm - 3:30pm
Location: Biozentrum U1.111 (Biozentrum, 302 seats)

Presentations
inv-improving-replicability: 1

Questionable Research Practices - From Small Errors to Research Misconduct

Leonhard Held

University of Zurich, Switzerland

The pressure to 'publish or perish' increases the chances that researchers report results selectively, apply data dredging, or even try to cheat the system. It is helpful to consider such Questionable Research Practices (QRPs) as a spectrum of behaviours, ranging from honest errors and mistakes at one end through to misconduct and fraud at the other. I will give some recent examples of this spectrum of QRPs from the biomedical literature. With the number of research paper retractions currently on the rise, we can no longer dismiss QRPs as isolated problems of a small number of people behaving sloppily or dishonestly. Instead, every researcher and every statistician may at times engage in QRPs and hence should be aware of the various forms they take in their own and others' research. Addressing QRPs should be a central part of our identity as biostatisticians to facilitate rigorous, transparent, and reproducible research practices.



inv-improving-replicability: 2

Improving the replicability of applied and methodological research

Sabine Hoffmann

Ludwig-Maximilians-University Munich, Germany

In recent years, there has been increasing awareness that result-dependent selective reporting among a multiplicity of possible analysis strategies leads to unreplicable research findings. While the use of better statistical methods is one of the solutions often suggested to improve the replicability of research findings, there is also evidence that methodological research is itself not immune to incentives encouraging result-dependent selective reporting. This talk will introduce different topics that are relevant to the replicability of research findings in applied and methodological research and give an overview of potential solutions, ranging from pre-registration, registered reports and blind analysis to multiverse-style analyses, multi-analyst studies and neutral simulation studies.
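
As a rough illustration of the multiverse idea mentioned in this abstract (a hypothetical sketch, not material from the talk), the Python code below fits the same invented exposure-outcome model under every combination of a few defensible analysis choices and reports the spread of the resulting estimates instead of a single, possibly result-dependent, selection. The data, variable names and choice grid are all made up for illustration.

# Hypothetical multiverse-style analysis: the same question is analysed under
# every combination of defensible preprocessing/modelling choices, and the
# full range of estimates is reported rather than a single selected result.
from itertools import product

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Simulated toy data: a continuous outcome, a binary exposure, two covariates.
n = 500
age = rng.normal(50, 10, n)
sex = rng.integers(0, 2, n)
exposure = rng.integers(0, 2, n)
outcome = 0.3 * exposure + 0.02 * age + rng.normal(0, 1, n)

# Analysis choices that could all be defended a priori.
covariate_sets = [[], ["age"], ["sex"], ["age", "sex"]]
outlier_rules = [None, 3.0]  # keep everyone, or drop outcomes with |z| > 3

estimates = []
for covs, z_cut in product(covariate_sets, outlier_rules):
    keep = np.ones(n, dtype=bool)
    if z_cut is not None:
        z = (outcome - outcome.mean()) / outcome.std()
        keep = np.abs(z) <= z_cut
    columns = [exposure[keep]]
    if "age" in covs:
        columns.append(age[keep])
    if "sex" in covs:
        columns.append(sex[keep])
    design = sm.add_constant(np.column_stack(columns))
    fit = sm.OLS(outcome[keep], design).fit()
    estimates.append(fit.params[1])  # coefficient of the exposure

print(f"{len(estimates)} specifications; exposure estimate ranges "
      f"from {min(estimates):.3f} to {max(estimates):.3f}")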



inv-improving-replicability: 3

Method benchmarking in computational biology - current state and future perspectives

Charlotte Soneson

Friedrich Miescher Institute for Biomedical Research (Switzerland), SIB Swiss Institute of Bioinformatics (Switzerland)

Researchers, regardless of discipline, are often faced with a choice between multiple computational methods when performing data analyses. Method benchmarking aims to rigorously compare the performance of different methods, typically using ground truth derived from well-characterized reference datasets, in order to determine the strengths and weaknesses of each method or to provide recommendations regarding suitable choices of methods for a specific analysis task. In this talk I will discuss the current state of benchmarking in computational biology, using examples from the field of single-cell data analysis, and present challenges as well as ideas for making benchmarking more reproducible, extensible and continuous.
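
As a schematic illustration of the benchmarking setup described in this abstract (an illustrative sketch, not an example from the talk), the Python code below scores several off-the-shelf clustering methods against simulated ground truth using a common metric. The simulated data, the chosen methods and the adjusted Rand index are stand-ins for the curated reference datasets and evaluation criteria used in real benchmarks.

# Schematic method benchmark: each candidate method is run on reference data
# with known ground truth and scored with the same metric, so strengths and
# weaknesses can be compared directly.
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score
from sklearn.mixture import GaussianMixture

# Simulated data with known labels stands in for a well-characterized
# reference dataset (e.g. an annotated single-cell benchmark).
X, truth = make_blobs(n_samples=600, centers=4, cluster_std=2.0, random_state=0)

methods = {
    "k-means": KMeans(n_clusters=4, n_init=10, random_state=0),
    "hierarchical": AgglomerativeClustering(n_clusters=4),
    "Gaussian mixture": GaussianMixture(n_components=4, random_state=0),
}

for name, model in methods.items():
    labels = model.fit_predict(X)  # cluster assignments from this method
    print(f"{name:>16}: adjusted Rand index = {adjusted_rand_score(truth, labels):.3f}")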



inv-improving-replicability: 4

Software sensitivity analysis in medical and methodological research

Tim P. Morris

UCL, United Kingdom

Principled sensitivity analysis helps researchers assess the sensitivity of their inferences to assumptions. Assumptions that spring to mind may be normality or independence, which are inherent to a particular statistical method. However, a software implementation may make further ‘assumptions’ through its default settings. High-quality statistical software gives users control over options/arguments but makes defensible default choices otherwise. It is not unusual for there to be more than one defensible choice, making defaults somewhat arbitrary. The collection of default choices used by different software implementations may in aggregate lead to software sensitivity for a given method. I will present some collected examples of such ‘software sensitivity’ in medical and methodological research, and argue – to myself as much as to others – for us to consider it more routinely.
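
As a small, hypothetical illustration of such 'software sensitivity' (not an example from the talk), the Python snippet below runs the same two-sample t-test on the same data under two defaults: scipy.stats.ttest_ind pools the group variances by default, whereas R's t.test() defaults to the Welch (unequal-variance) test, so two analysts relying on their software's defaults can report different p-values.

# Two implementations of 'the same' two-sample t-test, differing only in a
# default: scipy.stats.ttest_ind assumes equal variances unless told otherwise,
# while R's t.test() defaults to the Welch (unequal-variance) test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
a = rng.normal(0.0, 1.0, size=20)   # small group, small variance
b = rng.normal(0.6, 3.0, size=120)  # large group, large variance

pooled = stats.ttest_ind(a, b)                  # scipy default: equal_var=True
welch = stats.ttest_ind(a, b, equal_var=False)  # R's default behaviour

print(f"pooled-variance p-value: {pooled.pvalue:.4f}")
print(f"Welch p-value:           {welch.pvalue:.4f}")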



 