Conference Agenda (All times are shown in Eastern Daylight Time)
Epistemological Beliefs as Predictors of Generative AI Familiarity, Perceived Issues Likelihood, and Usage
S.-C.J. Sin
Nanyang Technological University, Singapore
With the rising popularity of, and ethical concerns about, generative artificial intelligence (GAI), there is strong interest in understanding the factors behind its perception and use. Epistemological beliefs, though found pertinent to information behavior, are rarely studied in GAI usage research. This study conducted path analysis on survey responses from 322 U.S. adults to explore how epistemological beliefs (from Schommer’s Epistemological Questionnaire) relate to GAI perception (familiarity and perceived likelihood of GAI issues) and use (frequency and seriousness). The study found direct positive paths from Avoid Ambiguity beliefs to perceived GAI issues likelihood and usage frequency, and indirect positive paths from Depend on Authority to frequency and seriousness of use via familiarity with GAI. Theoretical implications and practical implications for information literacy are discussed.
11:15am - 11:45am
“Let’s ask Meta AI!”: Information Seeking Practices with Meta AI on WhatsApp
K. A. K. Adavi, A. Acker
University of Texas at Austin, USA
In 2023, Meta launched generative AI (GenAI) features called “Meta AI” for general search and text-to-image creation in its family of apps, including the superapp WhatsApp. Currently, there is no empirical research on how people use these GenAI features in WhatsApp. To understand current information practices with Meta AI, we conducted an interview and task-based study with 26 Indian students at a large public university in the United States. We find that information seeking, planning activities, and image creation are the largest use cases of Meta AI. Participants described relying on external URL links as markers of credibility in their fact-checking and planning tasks. We argue that Meta’s platform partnerships are likely to influence the kinds of search results participants receive and rely on. Our key contribution to information science is to urge researchers to expand the sites of studying personal information practices as GenAI-based search features are adopted, and to consider searching activities that occur within superapps like WhatsApp.
11:45am - 12:00pm
From Open‑Ended Text to Taxonomy: An LLM‑Based Framework for Information Sources for Disability Services
J. H.-P. Hsu, M. Lee
George Mason University, USA
People with disabilities (PWD) and their family members often struggle to find information about available services. One approach to addressing this information access problem is to understand the ecology of available information sources. However, identifying the landscape of information sources is challenging due to the variety of sources and their varying visibility. This study proposes a computational approach to processing open-ended survey answers by constructing a hierarchical taxonomy of information sources. We developed a semi-automated, LLM-based framework to build a taxonomy of information sources from open-ended survey answers. The resulting 3-tier taxonomy captures broad categories and fine-grained entities, supporting multi-level analysis of information sources. This work explores the feasibility of LLM-based taxonomy building and offers a scalable framework for processing open-ended texts.
12:00pm - 12:15pm
Learning with Generative AI: Evaluating Acceptability of Fact-Checking Digital Nudges
C. S. Lee, T. M. C. Nguyen
Nanyang Technological University, Singapore
This study examines learners' acceptability of digital fact-checking nudges from the perspective of dual process theory, evaluating learners’ perceptions of two types of nudges, one based on heuristic processing (System 1) and one on systematic processing (System 2), and the impact of learners’ profiles on those perceptions. The study surveyed 300 students in higher education and analyzed their GenAI usage behaviors and perceptions of GenAI digital fact-checking nudges. Overall, results indicate that learners perceived System 1 nudges to be more effective. While participants’ academic discipline did not significantly affect their perceptions of the acceptability and effectiveness of either nudge type, their GenAI usage frequency had a significant impact on nudge perception. Avid GenAI learners had a more positive perception of nudges, especially System 2 nudges. In terms of theoretical contributions, the study addresses the gap in cognitive processing research on nudge design for learning and for fact-checking GenAI responses. As for practical contributions, the study offers insights for designing effective fact-checking nudges depending on learners’ level of usage of and familiarity with GenAI.