ASIS&T 2025 Conference Agenda (All times are shown in Eastern Daylight Time)

Overview and details of the sessions of this conference.

Session Overview

Session: Paper Session 13: AI in Scientific Publishing
Time: Monday, 17/Nov/2025, 11:00am - 12:30pm
Location: Conference Theater

Presentations
11:00am - 11:30am

AI-Augmented Search for Systematic Reviews: A Comparative Analysis

V. Vera, V. Khandelwal, K. Roy, R. Garimella, H. Surana, A. Sheth

University of South Carolina, USA

Researchers are increasingly seeking to automate systematic review workflows to reduce time and labor. However, AI-generated search strategies often contain errors that can significantly undermine the reliability of the evidence. To evaluate the relevance and reproducibility of automated search strategies, we conducted a comparative analysis between a human-in-the-loop system built on a neurosymbolic AI framework (NeuroLit Navigator) and three AI systems that primarily rely on generative language models: Scite, Consensus, and Perplexity. Results showed that, through an iterative human-in-the-loop approach, NeuroLit Navigator produced more precise and focused search strategies aligned with domain-specific terminologies than the commercial LLM-based systems, which exhibited issues such as a lack of reproducibility and interpretability. This study highlights the potential of human-AI collaboration in systematic review workflows, suggesting that AI should augment, rather than replace, librarian expertise. Our findings contribute to the growing field of human-centered AI by providing a model for designing AI systems in information-intensive domains.



11:30am - 12:00pm

Disciplinary Diversity in Academic AI Adoption: A Comparative Analysis of AI Tool Usage Declarations Across Scientific Fields

Z. Xu

University of Oklahoma, USA

This study maps the adoption patterns of AI tools in academic writing by analyzing 7,953 AI usage declarations from journal publications. AI adoption increased from 62.6% (October 2023) to 78.2% (March 2025), approaching a projected 85% saturation level. Physical and Social Sciences show the highest adoption rates, while Health and Life Sciences lag behind. ChatGPT dominates across all disciplines (67-75% of usage), with disciplinary preferences emerging: multidisciplinary research favors writing tools, while Physical Sciences make greater use of translation tools. Language-related functions comprise 80-90% of all usage, with discipline-specific emphasis patterns. Network analysis reveals that Physical Sciences exhibit the most diverse tool ecosystem, with ChatGPT serving as the central hub across fields. This first comprehensive cross-disciplinary analysis of actual AI usage patterns contributes valuable insights for academic publishing policies and discipline-specific AI literacy development.



12:00pm - 12:30pm

Automatic Identification of Citation Distortions in Biomedical Literature: A Case Study

M. J. Sarol (1), J. Schneider (1,2), H. Kilicoglu (1)

(1) University of Illinois at Urbana-Champaign, USA; (2) Harvard University, USA

Citations are central to the propagation of scientific information. Ensuring the accuracy of citations is essential to maintaining the credibility of scientific knowledge. However, assessing citations is a significant challenge, especially at scale. This study examines the utility of natural language processing (NLP) in identifying poor citation practices. Specifically, we replicate Greenberg's 2009 study on citation distortions in Alzheimer's research, which demonstrated how poor citation practices can contribute to the establishment of unsubstantiated claims as facts. We explored two approaches: one that utilizes large language models (LLMs), and another that relies on existing publicly available NLP tools and publication metadata. Our findings suggest that, among Greenberg's three types of citation distortion (citation bias, amplification, and invention), current NLP tools are most effective at detecting amplification, with more limited success in replicating Greenberg's results for the other two distortion types. Further refinements to LLM pipelines are needed to better capture the subtleties of citation bias and invention in biomedical publications.



 
Contact and Legal Notice · Contact Address:
Privacy Statement · Conference: ASIS&T 2025
Conference Software: ConfTool Pro 2.6.154+TC
© 2001–2025 by Dr. H. Weinreich, Hamburg, Germany