Conference Agenda (All times are shown in Eastern Daylight Time)

Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session
Paper Session 10: AI in Healthcare
Time:
Monday, 17/Nov/2025:
9:00am - 10:30am

Location: Conference Theater


Presentations
9:00am - 9:30am

Can I Trust This Chatbot? Assessing User Privacy in AI-Healthcare Chatbot Applications

R. Yener1, G.-H. Chen1, E. Gumusel2, M. Bashir1

1University of Illinois Urbana-Champaign, USA; 2Indiana University Bloomington, USA

As Conversational Artificial Intelligence (AI) becomes more integrated into everyday life, AI-powered chatbot mobile applications are being increasingly adopted across industries, particularly in the healthcare domain. These chatbots offer accessible, 24/7 support, yet their collection and processing of sensitive health data present critical privacy concerns. While prior research has examined chatbot security, privacy issues specific to AI healthcare chatbots have received limited attention. Our study evaluates the privacy practices of 12 widely downloaded AI healthcare chatbot apps available on the App Store and Google Play in the United States. We conducted a three-step assessment analyzing: (1) privacy settings during sign-up, (2) in-app privacy controls, and (3) the content of privacy policies. The analysis identified significant gaps in user data protection. Our findings reveal that half of the examined apps did not present a privacy policy during sign-up, and only two provided an option to disable data sharing at that stage. The majority of the apps’ privacy policies failed to address data protection measures. Moreover, users had minimal control over their personal data. The study provides key insights for information science researchers, developers, and policymakers to improve privacy protections in AI healthcare chatbot apps.



9:30am - 9:45am

Detecting AI-Generated vs. Human-Written Health Misinformation: The Impact of eHealth Literacy on Accuracy and Sharing

Y. Xie, P. Zhang

Peking University, People's Republic of China

The widespread dissemination of AI-generated health misinformation poses significant challenges to public health. This study investigates the influence of eHealth literacy, the ability to seek, comprehend, and appraise health information from digital sources, on individuals’ ability to detect AI-generated (as opposed to human-written) health misinformation and on their subsequent sharing behaviors. We conducted an online experiment with 627 participants, who were presented with 12 messages and asked to detect both AI-generated and human-written misinformation. Results show that: 1) AI-generated information is often perceived as more convincing, regardless of whether the information is true or false. 2) Higher eHealth literacy paradoxically correlates with poorer detection accuracy, especially among younger, healthier participants, exposing a concerning self-assessment gap. 3) Participants with lower accuracy in detecting misinformation are more likely to share health information and less likely to correct it when they become aware of its inaccuracy, thus spreading misinformation further. These findings highlight a disparity, and a challenge in using self-perceived eHealth literacy as an indicator of actual ability to handle health misinformation. Targeted interventions to enhance digital health literacy, particularly among younger and healthier populations, are urgently needed.