Conference Agenda (All times are shown in Eastern Daylight Time)

Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

 
Session Overview
Session
Virtual Paper Session 15: Scholarly Publishing 2
Time:
Friday, 12/Dec/2025:
4:00pm - 5:00pm

Virtual location: Virtual


Presentations
4:00pm - 4:30pm

“It’s like some weird AI ouroboros”: Artificial Intelligence Use and Avoidance in Scholarly Peer Review

A. H. Poole1, A. Todd-Diaz2

1Drexel University, USA; 2Towson University, USA

Peer review constitutes a fundamental part of the global system of scholarly communication. Generative Artificial Intelligence (GenAI) poses an existential challenge to this system. Ours is the first empirical study to scrutinize the intersection of AI and peer review from the perspective of information and library scientists. It is also the first to discuss core information practices, namely use and avoidance, not only in the context of peer review but in the context of AI more broadly. Our survey participants addressed their personal use or avoidance of AI, their overall stance on AI use or avoidance, detecting and sanctioning illicit AI use, starting to use or continuing to avoid AI, developing an AI use policy, and what they perceived as the future (both predicted and hoped-for) of AI. Most respondents underscored the indubitably human-centered nature of the peer review process. They gave their imprimatur only to the most limited uses of AI, e.g., for activities such as checking grammar and style. Their AI avoidance took root in deeply felt moral and ethical commitments as well as more prosaic concerns about bias and quality. We discuss the implications of these findings for research and practice.



4:30pm - 5:00pm

Mapping the Landscape, Measuring the Gap: Qualitative Methods Reporting in Information Science Research

R. D. Frank1,2, A. Kriesberg3

1University of Michigan, USA; 2Einstein Center Digital Future, Germany; 3Simmons University, USA

We examined qualitative methods reporting in information science research by analyzing ASIS&T conference papers (2018-2022) and comparing findings with journal publishing guidelines. Our study of 117 papers using exclusively qualitative methods revealed significant gaps in methodological documentation. While 78.6% of papers involved human subjects research (primarily interviews), only 28.3% mentioned IRB approval. Similarly, 66.7% failed to describe analytical tools used. Journal publishing guidelines across the field showed inconsistent requirements for qualitative research reporting, with some mandating IRB disclosure while others provided minimal direction. The prevalent use of passive voice in methods sections often obscured critical information about data producers and collection processes. These findings demonstrate a need for more standardized reporting guidelines for qualitative research in information science. We recommend that ASIS&T publishing venues require authors to provide, at minimum: data production year(s), clear identification of data producers, persistent identifiers when available, and IRB approval status for human subjects research. These measures could enhance transparency and facilitate better understanding of qualitative research practices in the field.



Conference: ASIS&T 2025
Conference Software: ConfTool Pro 2.6.154+TC
© 2001–2025 by Dr. H. Weinreich, Hamburg, Germany