Conference Agenda (All times are shown in Mountain Daylight Time (MDT) unless otherwise noted)

Overview and details of the sessions of this conference.

Session Overview
Session
Paper Session 20: User Perceptions in Learning and Discovery
Time:
Tuesday, 29/Oct/2024:
9:00am - 10:30am

Session Chair: Le Yang, University of Oregon, USA
Location: Imperial Ballroom 3, Third Floor


Presentations
9:00am - 9:15am
ID: 361 / PS-20: 1
Short Papers
Topics: Information Science Education; Information; Learning (curriculum design; instructional resources and methods; educational program planning & technologies; e-learning; m-learning; learning analytics; knowledge co-construction, searching as learning)
Keywords: Artificial intelligence, generative AI, LIS education, LIS practice, information literacy

Student Perceptions of Generative AI in LIS Coursework

Priya Kizhakkethil, Carol Perryman

Texas Woman's University, USA

The purpose of this study is to inform LIS curriculum development by understanding student perceptions of generative AI tools. Assignments with a generative AI component from two courses (n=65) were de-identified and analyzed after the end of the semester using a grounded qualitative approach. Students recognized the need for caution and critical evaluation of generative AI output, while noting areas of utility in practice, including information retrieval, reference services, teaching, and information access. Responses highlight the importance of information literacy in the use of these tools and the potential implications for practice in the information professions. The study contributes to the literature on curriculum development for this disruptive technology by identifying the professional competencies that are affected.



9:15am - 9:30am
ID: 212 / PS-20: 2
Short Papers
Topics: Human-Computer Interaction (usability and user experience; human-technology interaction; human-AI interaction; user-centered design)
Keywords: Nudge, procrastination, nudge acceptability, learning behavior, digital learning environment

Acceptability of Nudge in Digital Learning Environment

Kok Khiang Lim, Chei Sian Lee

Nanyang Technological University, Singapore

Digital nudging is gaining traction in the educational domain as a way to guide students’ decision-making and achieve desirable learning outcomes through subtle changes in the digital learning environment. From an information science perspective, these changes are realized through informational cues and human-computer interface design. Studies have shown that nudges can influence students’ behaviors, but the extent of that influence, and who is susceptible to nudges, remains unclear. Although progress has been made in understanding nudge acceptability, research in the context of learning and of student characteristics, such as procrastination behavior, remains limited. To fill this gap, this study surveyed 305 university students to assess their acceptance of two types of nudges: System 1 nudges, which involve automatic and intuitive processes, and System 2 nudges, which engage deliberate and reflective thinking. The results show that students, regardless of their procrastination tendencies, were receptive to nudges that support their learning. System 1 nudges were preferred for their simple and straightforward intervention approach. These insights advance nudge research by demonstrating that students with various procrastination tendencies are receptive to nudging, and they can guide researchers in designing tailored nudges to maximize effectiveness.



9:30am - 10:00am
ID: 192 / PS-20: 3
Long Papers
Topics: Human-Computer Interaction (usability and user experience; human-technology interaction; human-AI interaction; user-centered design)
Keywords: AI-assisted, academic reading, reading experience

Exploration of the Effectiveness and Experience of AI-Assisted Academic Reading (Best Long Paper Award)

Xiaochuan Zheng, Hao Fan

Wuhan University, People's Republic of China

AI-assisted reading tools have rapidly gained popularity because they can simplify and improve the process of comprehending research papers. This study examines their actual impact on students’ academic reading effectiveness and experience. Two groups of participants were recruited for a quasi-experiment, and the Mann-Whitney non-parametric test was used to analyze their academic reading effectiveness and experience. Content analysis was employed to extract and analyze the prompts participants posed to the AI-assisted reading tool. Results show positive impacts on retelling of “Results”, “Conclusion”, and “Critical thinking”, but negative effects on “Background & purpose”, “Methodology”, and “Detail”. User experiences reveal “Concentration” challenges but positive perceptions of “Time fly”, “Control”, and “Feel joyful”. Students tended to pose self-generated prompts rather than recommended prompts. AI-assisted reading tools offer overall benefits, but their negative impacts must be recognized. Students are encouraged to enhance their digital literacy when using AI-assisted reading tools, and optimizing tool functions is essential for sustainable development.



10:00am - 10:15am
ID: 436 / PS-20: 4
Short Papers
Topics: Human-Computer Interaction (usability and user experience; human-technology interaction; human-AI interaction; user-centered design)
Keywords: Reading, PIRLS, GenAI, question creation, question assessment

Comparative Study of GenAI (ChatGPT) vs. Human in Generating Multiple Choice Questions Based on the PIRLS Reading Assessment Framework

Yu Yan Lam, Samuel Kai Wah Chu, Elsie Ong, Winnie Suen, Lingran Xu, Chin Lui Lavender Lam, Man Yu Wong

The Hong Kong Metropolitan University, Hong Kong S.A.R. (China)

Human-generated multiple-choice questions (MCQs) are commonly used to ensure objective evaluation in education, but generating high-quality questions is difficult and time-consuming. Generative artificial intelligence (GenAI) has emerged as an automated approach to question generation, though challenges remain concerning biases and diversity in training data. This study compares the quality of GenAI-generated MCQs with human-created ones. In Part 1 of this study, 16 MCQs were created by humans and by GenAI, each aligned with the Progress in International Reading Literacy Study (PIRLS) assessment framework. In Part 2, four assessors rated the quality of the MCQs on clarity, appropriateness, suitability, and alignment with PIRLS. Wilcoxon rank sum tests were conducted to compare GenAI-generated with human-generated MCQs. The findings highlight GenAI's potential, as its questions were difficult to differentiate from human-created ones, and offer recommendations for integrating AI technology in the future.



Contact and Legal Notice
Privacy Statement · Conference: ASIS&T 2024
Conference Software: ConfTool Pro 2.6.153+TC
© 2001–2025 by Dr. H. Weinreich, Hamburg, Germany