ASIS&T 2025 Conference Agenda (All times are shown in Eastern Daylight Time)

Session Overview

Session: Paper Session 26: Prompting Generative AI
Time: Tuesday, 18/Nov/2025, 11:00am - 12:30pm
Location: Potomac II


Presentations
11:00am - 11:30am

Enhancing Critical Thinking in Generative AI Search with Metacognitive Prompts

A. Singh, Z. Guan, S. Y. Rieh

The University of Texas at Austin, USA

The growing use of Generative AI (GenAI) conversational search tools has raised concerns about their effects on people’s metacognitive engagement, critical thinking, and learning. As people increasingly rely on GenAI to perform tasks such as analyzing and applying information, they may become less actively engaged in thinking and learning. This study examines whether metacognitive prompts—designed to encourage people to pause, reflect, assess their understanding, and consider multiple perspectives—can support critical thinking during GenAI-based search. We conducted a user study (N=40) with university students to investigate the impact of metacognitive prompts on their thought processes and search behaviors while searching with a GenAI tool. We found that these prompts encouraged more active engagement, leading students to explore a broader range of topics and pursue deeper inquiry through follow-up queries. Students reported that the prompts were especially helpful for considering overlooked perspectives, promoting evaluation of AI responses, and identifying key takeaways. Additionally, the effectiveness of these prompts was influenced by students’ metacognitive flexibility. Our findings highlight the potential of metacognitive prompts to foster critical thinking and provide insights for designing and implementing metacognitive support in human-AI interactions.
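
A minimal sketch of the kind of intervention the abstract describes, interleaving a reflective question with each GenAI answer. The prompt texts, the answer_query placeholder, and the function names below are hypothetical illustrations under that assumption, not the study's actual materials.

    import random

    # Hypothetical metacognitive prompts illustrating the categories the
    # abstract names: pausing, reflecting, assessing one's understanding,
    # and considering multiple perspectives. Not the study's materials.
    METACOGNITIVE_PROMPTS = [
        "Pause: what do you already know about this topic?",
        "How confident are you that this answer is complete?",
        "What perspective might this response be overlooking?",
        "What key takeaway would you explain to someone else?",
    ]

    def answer_query(query: str) -> str:
        """Placeholder for a GenAI search backend (e.g., an LLM API call)."""
        return f"[generated answer to: {query}]"

    def search_with_reflection(query: str) -> str:
        """Append a randomly chosen reflective prompt to each answer."""
        return f"{answer_query(query)}\n\nReflect: {random.choice(METACOGNITIVE_PROMPTS)}"

    print(search_with_reflection("effects of social media on attention"))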



11:30am - 12:00pm

“Sorry, I Cannot Fulfill That Request”: Analyzing Large Language Model Responses, Redirections, and Refusals to Polarized News Topics

H. Triem, R. E. Boyle

The University of Texas at Austin, USA

We are reaching an era in which the public increasingly relies on large language models (LLMs) for information on current events. Existing research on the subject asks LLMs to take a political stance through survey questionnaires, persona adoption, or multiple-choice prompting. This research examines the implicit political lean of LLMs when responding in natural language to queries on 77 topics that were of public interest from 2017 to 2021. Four LLMs were prompted using two natural prompting styles, resulting in 808 unique responses. Responses were human-annotated to identify topics that LLMs redirected or outright refused to answer, and were classified via a neural network as conservative, moderate, or liberal. LLM responses were further analyzed along the dimensions of whether topics were polarized, international, or phrased in non-neutral language. Findings suggest that these LLMs lean moderate to liberal, erroneously refuse neutral topics, and are inconsistent in answers to the same prompts. These findings illustrate the risk of relying on generative AI for answers in an increasingly polarized environment and call for information professionals to examine and discuss implicit misinformation in the age of AI.
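
As a rough illustration of the coding pipeline the abstract outlines (refusal and redirection detection followed by stance labeling), the sketch below uses invented keyword markers and a stubbed stance step; the study itself relied on human annotation and a neural network classifier.

    # Toy response-coding pipeline. The marker tuples are invented
    # stand-ins; the stance step is a stub where the paper's neural
    # classifier (conservative / moderate / liberal) would be called.
    REFUSAL_MARKERS = ("i cannot fulfill", "i can't help with", "unable to assist")
    REDIRECT_MARKERS = ("consult a professional", "there are many viewpoints")

    def code_response(text: str) -> dict:
        lower = text.lower()
        refused = any(m in lower for m in REFUSAL_MARKERS)
        redirected = not refused and any(m in lower for m in REDIRECT_MARKERS)
        # Stub: a trained stance model would score non-refusals here.
        stance = None if refused else "moderate"
        return {"refused": refused, "redirected": redirected, "stance": stance}

    print(code_response("Sorry, I cannot fulfill that request."))
    # -> {'refused': True, 'redirected': False, 'stance': None}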



12:00pm - 12:15pm

Understanding User Prompting Behavior in Generative AI: A Component Analysis

Z. Jin, G. Meng, X. Wang, J. Wang, C. Liu, J. Zhang

Department of Information Management, Peking University, People's Republic of China

Generative AI (GenAI), as exemplified by ChatGPT, is transforming the way people seek information and interact with information systems and resources. This study investigates users’ prompt formulation behavior through a longitudinal observation of experienced ChatGPT users. Extending prior research on prompt engineering, this study introduces the IIOCR framework, delineating five core components: input, instruction, output, context, and relation. The findings reveal that users have a strong preference for simple prompts, with single-component prompts accounting for 43.8% of all prompts. Dual-component combinations constitute 38.2%, with Input + Instruction (20.2%) being the most frequent pattern. Only 18.0% of prompts involve multi-component combinations, indicating that complex prompt formulations are infrequent in typical user interactions. The IIOCR framework thus reveals users’ preference for simplicity and directness. It also offers practical insights for user-centered AI design by emphasizing the instruction, input, and output components that address users’ core needs.
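
A small sketch of how coded prompts might be bucketed under the five IIOCR components the abstract names. The example prompts and the representation of a coded prompt as a set of component tags are assumptions for illustration, not the authors' instrument.

    from collections import Counter

    # The five IIOCR components named in the abstract.
    COMPONENTS = {"input", "instruction", "output", "context", "relation"}

    def bucket(tags):
        """Classify a coded prompt as single-, dual-, or multi-component."""
        n = len(set(tags) & COMPONENTS)
        return {1: "single", 2: "dual"}.get(n, "multi")

    # Hypothetical coded prompts: each set lists the components an
    # annotator identified in one user prompt.
    coded = [
        {"instruction"},                     # "Summarize this."
        {"input", "instruction"},            # text to act on + what to do
        {"input", "instruction", "output"},  # plus a required output format
    ]
    print(Counter(bucket(tags) for tags in coded))
    # -> Counter({'single': 1, 'dual': 1, 'multi': 1})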



12:15pm - 12:30pm

What Makes a Good Prompter? Insights into Prompt Literacy across Mind, Experience, and Culture

B. Jia¹, Y. Pu¹, Y. Liu²

¹Peking University, People's Republic of China; ²University of Washington, USA

In the era of generative artificial intelligence (AI), the skill of effectively crafting prompts, referred to as prompt literacy, has become increasingly vital. Despite its significance, there remains a paucity of research delineating the attributes that constitute a proficient prompter. This study introduces a comprehensive, multidimensional framework to assess prompt literacy, encompassing cognitive, experiential, and sociocultural dimensions. To empirically investigate this framework, we conducted a mixed-methods experiment involving 60 participants aged 18 to 35. Throughout the experimental sessions, participants' eye movements were recorded using Tobii Spark eye-tracking technology. Complementing this quantitative data, we employed think-aloud protocols to gain insights into participants' cognitive strategies during the prompting process. Post-task, participants completed detailed questionnaires assessing their demographics, AI usage habits, emotional responses, and perceived task difficulty. Semi-structured interviews were also conducted to delve deeper into their prompting strategies and reflections. Our analysis revealed distinct prompting typologies, each characterized by unique behavioral signatures and eye-tracking markers. These findings offer nuanced insights into the competencies that underpin effective prompting in generative AI contexts.



 