Conference Agenda (All times are shown in Eastern Daylight Time)
Overview and details of the sessions of this conference.
Session Overview
Virtual Paper Session 15: Scholarly Publishing 2
Presentations
4:00pm - 4:30pm
“It’s like some weird AI ouroboros”: Artificial Intelligence Use and Avoidance in Scholarly Peer Review
Drexel University, USA; Towson University, USA
Peer review constitutes a fundamental part of the global system of scholarly communication. Generative Artificial Intelligence (GenAI) poses an existential challenge to this system. Ours is the first empirical study to scrutinize the intersection of AI and peer review from the perspective of information and library scientists. It is also the first to discuss core information practices, namely use and avoidance, not only in the context of peer review but in the context of AI more broadly. Our survey participants addressed their personal use or avoidance of AI, their overall stance on AI use or avoidance, detecting and sanctioning illicit AI use, starting to use or continuing to avoid AI, developing an AI use policy, and what they perceived as the future (both predicted and hoped-for) of AI. Most respondents underscored the indubitably human-centered nature of the peer review process. They gave their imprimatur only to the most limited uses of AI, e.g., for activities such as checking grammar and style. Their AI avoidance took root in deeply felt moral and ethical commitments as well as more prosaic concerns about bias and quality. We discuss the implications of these findings for research and practice.

4:30pm - 4:45pm
“Are We Still in Control?”: Exploring Patterns of AI Dependency in Scientific Research
Wuhan University, People's Republic of China
The growing use of artificial intelligence (AI) in scientific research has raised concerns about “AI dependency”, a phenomenon that remains conceptually ambiguous and underexplored. Guided by self-regulation theory, this study proposes a four-quadrant typology of AI dependency based on goal orientation and self-efficacy. Semi-structured interviews with 20 researchers revealed four distinct patterns: collaborative active, instrumental active, passive compensatory, and passive pathway. Researchers with high goal value and high self-efficacy (collaborative active) treated AI as a knowledge collaborator while maintaining autonomy. Those with high self-efficacy but low goal value (instrumental active) prioritized efficiency and treated AI as a pragmatic tool. In contrast, those with high goal value but low self-efficacy (passive compensatory) relied on AI to compensate for skill gaps, while individuals low in both dimensions (passive pathway) exhibited habitual dependence and emotional distress when AI was unavailable. These findings reveal the complex psychological and behavioral dynamics underlying AI dependency, offering a more nuanced conceptual understanding and informing interventions that promote critical, self-regulated AI use.