Conference Agenda (All times are shown in Eastern Time)

Overview and details of the sessions of this conference.
Session Overview

Session: Paper Session 10: AI in Healthcare
Time: Monday, 17 Nov 2025, 9:00am - 10:30am
Location: Conference Theater

Presentations
9:00am - 9:30am

Can I Trust This Chatbot? Assessing User Privacy in AI-Healthcare Chatbot Applications

R. Yener1, G.-H. Chen1, E. Gumusel2, M. Bashir1

1University of Illinois Urbana-Champaign, USA; 2Indiana University Bloomington, USA

As Conversational Artificial Intelligence (AI) becomes more integrated into everyday life, AI-powered chatbot mobile applications are increasingly adopted across industries, particularly in the healthcare domain. These chatbots offer accessible, 24/7 support, yet their collection and processing of sensitive health data raise critical privacy concerns. While prior research has examined chatbot security, privacy issues specific to AI healthcare chatbots have received limited attention. Our study evaluates the privacy practices of 12 widely downloaded AI healthcare chatbot apps available on the App Store and Google Play in the United States. We conducted a three-step assessment analyzing (1) privacy settings during sign-up, (2) in-app privacy controls, and (3) the content of privacy policies. The analysis identified significant gaps in user data protection: half of the examined apps did not present a privacy policy during sign-up, only two provided an option to disable data sharing at that stage, and the majority of the apps' privacy policies failed to address data protection measures. Moreover, users had minimal control over their personal data. The study provides key insights for information science researchers, developers, and policymakers seeking to improve privacy protections in AI healthcare chatbot apps.



9:30am - 9:45am

Detecting AI-Generated vs. Human-Written Health Misinformation: The Impact of eHealth Literacy on Accuracy and Sharing

Y. Xie, P. Zhang

Peking University, People's Republic of China

The widespread dissemination of AI-generated health misinformation poses significant challenges to public health. This study investigates how eHealth literacy, the ability to seek, comprehend, and appraise health information from digital sources, affects individuals' ability to detect AI-generated (as opposed to human-written) health misinformation and their subsequent sharing behaviors. We conducted an online experiment with 627 participants, each of whom was presented with 12 messages containing both AI-generated and human-written misinformation. Results show that: 1) AI-generated information is often perceived as more convincing, regardless of whether the information is true or false; 2) higher eHealth literacy paradoxically correlates with poorer detection accuracy, especially among younger, healthier participants, exposing a concerning self-assessment gap; and 3) participants with lower detection accuracy are more likely to share health information and less likely to correct it once they become aware of its inaccuracy, spreading misinformation further. These findings highlight the challenge of using self-perceived eHealth literacy as an indicator of the actual ability to handle health misinformation. Targeted interventions to enhance digital health literacy, particularly among younger and healthier populations, are urgently needed.



9:45am - 10:15am

Impact of Cyberchondria on Unverified Health Information Sharing: A Moderated Mediation Approach

Q. Xiao, H. Zheng, J. Xu

School of Information Management, Wuhan University, People's Republic of China

People often share health information without adequate verification, contributing to the growing spread of health misinformation on digital platforms. While previous studies have explored various cognitive and psychological factors underlying such unverified sharing, limited attention has been given to cyberchondria, a pattern of excessive, anxiety-driven online health information seeking. Grounded in the Stressor-Strain-Outcome (SSO) framework, this study proposes a moderated mediation model linking cyberchondria to unverified health information sharing. Using data from a three-wave panel survey conducted in China, the results demonstrate that cyberchondria is positively associated with unverified health information sharing and that this association is partially mediated by information overload. Furthermore, the indirect relationship is stronger among individuals with higher beliefs in the reliability of their information sources, while it is not statistically significant among those with lower beliefs. These findings highlight the importance of understanding cyberchondria not only as an individual mental health concern but also as a pathological information behavior that contributes to the broader dynamics of misinformation spread in digital health environments.



 