Conference Agenda (All times are shown in Eastern Daylight Time)

Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Paper Session 8: Get Real: Identifying Misinformation
Time: Sunday, 16 Nov 2025, 4:00pm - 5:30pm
Location: Potomac I


Presentations
4:00pm - 4:30pm

Emerging Adulthood and Brazilian College Students’ Experiences with Misinformation in Social Media

C. C. Gonzaga1, S. Smith Budhai2, P. Pinto3, D. Agosto4

1Universidade Federal de Minas Gerais, Brazil; 2University of Delaware, USA; 3Drexel University, USA; 4Rutgers, the State University of New Jersey, USA

This paper considers emerging adulthood (EA) as a theoretical lens for understanding how Brazilian college students manage misinformation on social media. Data were collected through focus groups with 25 undergraduate library and information science majors at Brazil’s Federal University of Minas Gerais in the weeks surrounding Brazil’s 2022 presidential election, a period marked by intensified misinformation online. Data analysis connects five key developmental factors of EA to participants’ everyday information practices and shows that political tension and family conflicts were linked to experiences with misinformation. It also shows that these students are not passive consumers of information and misinformation but active navigators of the digital information landscape, with some acting as information evaluation experts for family members and others, especially in WhatsApp personal messaging and on social media. The authors conclude that efforts to combat misinformation must consider emerging adults’ developmental, social, and emotional dimensions, alongside technical and educational interventions.



4:30pm - 4:45pm

The Role of Self-Efficacy, Critical Thinking, and Media Literacy in Human Deepfake Video Detection

C. Chen, D. H.-L. Goh

Nanyang Technological University, Singapore

As deepfake videos become common online and harder to detect, understanding how people identify them is increasingly important. Although previous studies have explored how information literacy self-efficacy, critical thinking and media literacy relate to misinformation detection, few have focused specifically on deepfake videos, which are more visually deceptive and cognitively demanding to examine. While some research has examined the role of detection accuracy, less is known about how confidence interacts with individual traits during the detection process. Thus, this study examines how these three factors relate to people’s confidence and accuracy in identifying deepfake videos. Two hundred participants took part in an online study where they watched and evaluated a set of real and deepfake videos, rated their confidence, and completed a series of questionnaires. The results were analysed using PLS-SEM. Our findings show that all three traits influence detection accuracy, although new media literacy showed a negative effect. Moreover, confidence served as a mediator between new media literacy and detection accuracy. These findings suggest that helping people build stronger media-related skills and confidence may support better identification when examining manipulated video content. The study also highlights the need for investigation into the mechanisms behind miscalibrated confidence.



4:45pm - 5:15pm

Community-Driven Fact-Checking on WhatsApp: Who Fact-Checks Whom, and Why?

K. Garimella

Rutgers University, USA

This paper studies community-driven fact-checking (members of a community fact-checking their own content) on WhatsApp, with the aim of determining its prevalence, who does it, and whether it is effective. The study leverages two large datasets of WhatsApp group chats, encompassing both public and private group conversations with varying levels of intimacy among members. Adopting a mixed-methods approach, the research combines quantitative analysis of observational data with qualitative measures to shed light on these research questions.

The findings reveal that community-driven corrections are infrequent, and when they do occur, they are typically conveyed through polite requests aimed at alerting individuals about the presence of misinformation. However, users often exhibit apathy towards self-correction, disregard the corrections, or even feel offended by public corrections within the group. Notably, the responsibility of correcting misinformation primarily falls on active community members, with group administrators accounting for a relatively small portion (up to 20%) of the corrections. Additionally, the study uncovers significant variations in the types of corrections and responses to corrections, influenced by group norms and the degree of familiarity among group members. These observations suggest the existence of underlying dynamics of power and trust within these groups.



5:15pm - 5:30pm

Generation Zs’ Fight Against Deepfake Videos: A Survey on Identification Strategies

C. Chen, D. H.-L. Goh, H. Qiu, C. Neo

Nanyang Technological University, Singapore

Deepfake video detection among Generation Zs remains an understudied area despite growing concerns about their exposure to synthetic content. Previous research has typically focused on adults, emphasising important cues in detection accuracy, but little is known about which strategies Generation Zs utilise. To address this gap, we conducted a study with participants under 21 years old. Participants were shown four videos, two real and two deepfakes, after which they completed an online survey detailing the strategies they used for identification. Our findings show that visual, audio and knowledge-based cues were often used for detection. Audio cues, especially vocal and language features, were used more frequently in the correct detection of real videos than of deepfakes. In contrast, visual cues such as facial features did not show a significant difference in usage between real and deepfake videos and were often associated with the incorrect detection of deepfakes.



Contact and Legal Notice
Privacy Statement · Conference: ASIS&T 2025
Conference Software: ConfTool Pro 2.6.154+TC
© 2001–2025 by Dr. H. Weinreich, Hamburg, Germany