4:00pm - 4:30pm
Mining Collective Intelligence and Predicting Disruptive Paradigm Shifts via Human-Aware AI
A. J. Yang1, Y. Shi1, S. X. Zhao2,3, Y. Zhang1, S. Deng1
1School of Information Management, Nanjing University, Nanjing, 210023, China; 2Institute of Big Data (IBD), Fudan University, Shanghai, 200433, China; 3National Institute of Intelligent Evaluation and Governance, Fudan University, Shanghai, 200433, China
Scientific progress hinges on the interplay between collective intelligence and transformative paradigm shifts, yet predicting these revolutionary events remains a persistent challenge. This study introduces a novel human-aware AI framework that integrates the evolution of knowledge structures with the social dynamics of scientific communities to forecast groundbreaking innovations. Leveraging graph convolutional neural networks (GCNNs), we construct a hybrid higher-order network that unifies a domain knowledge graph—derived from millions of scientific publications—with a scientist collaboration-competition space, capturing both cooperative and competitive interactions among researchers. This approach quantifies collective intelligence by generating embeddings that reflect the intricate relationships between knowledge content and human agency. By analyzing thematic knowledge distances and social proximities within this integrated network, we identify pairs of scientific domains poised for disruptive convergence. Dynamic analysis of these embeddings further enables temporally precise predictions of paradigm shifts. Applied to the life sciences, our framework successfully aligns with historical milestones, such as Nobel Prize-winning discoveries, demonstrating its predictive power. This work offers a scalable, interpretable tool for anticipating scientific revolutions, bridging the gap between knowledge evolution and social dynamics, and providing actionable insights for fostering innovation across disciplines.
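To make the embedding step concrete, here is a minimal sketch, not the authors' implementation, of a single graph-convolution layer applied to a toy hybrid network that mixes domain and scientist nodes, followed by a cosine-distance scan over the domain embeddings. The edges, node features, and weights are all invented for illustration; the real framework trains on millions of publications.

```python
# Minimal sketch (illustrative only): one GCN layer over a toy hybrid
# network whose nodes mix "domain" and "scientist" vertices, followed by
# a cosine-distance scan for domain pairs with similar embeddings.
import numpy as np

rng = np.random.default_rng(0)

# Toy hybrid graph: 4 domain nodes (0-3) + 4 scientist nodes (4-7).
# Edges stand in for knowledge links, collaboration, and competition.
edges = [(0, 4), (1, 4), (1, 5), (2, 5), (2, 6), (3, 6), (3, 7), (0, 7)]
n = 8
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Symmetrically normalized adjacency with self-loops: D^{-1/2}(A+I)D^{-1/2}.
A_hat = A + np.eye(n)
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# One graph-convolution layer: Z = ReLU(A_norm @ H @ W).
H = rng.normal(size=(n, 16))          # initial node features (random here)
W = rng.normal(size=(16, 8)) * 0.1    # untrained weights, for illustration
Z = np.maximum(A_norm @ H @ W, 0.0)   # node embeddings

def cosine_dist(u, v):
    return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

# Rank domain pairs by embedding distance; in this toy setting, a small
# distance marks a candidate pair of converging fields.
domains = range(4)
pairs = sorted(
    (cosine_dist(Z[a], Z[b]), a, b) for a in domains for b in domains if a < b
)
for dist, a, b in pairs:
    print(f"domains {a}-{b}: distance {dist:.3f}")
```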
4:30pm - 5:00pm
Bridge or Blindspot? A Visual Analysis of Representation and Narrative in Cybersecurity Across Expertise Groups
Y.-W. Huang1, Y. Lin1, S.-Y. Lin1, W. Jeng1,2
1National Taiwan University, Taiwan; 2National Institute of Cyber Security, Taiwan
This study conducts a visual analysis of 491 participant-generated drawings to examine how individuals with varying cybersecurity expertise conceptualize security through visual metaphors and narrative strategies. Using a codebook grounded in visual rhetoric theory, we identified a striking consistency in imagery—particularly the recurring use of locks, shields, and oppositional symbols—across all groups. Findings suggest the existence of shared visual ontologies shaped by both common sense and collective stereotypes. We discuss the implications of these visual conventions for cybersecurity communication and propose future research integrating generative AI tools, audience analysis, and domain knowledge frameworks to uncover underrepresented conceptual gaps.
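As an illustration of how the consistency claim could be checked statistically, the sketch below runs a chi-square test on the rate of one coded metaphor across expertise groups. The counts are invented stand-ins, not the study's data, and the three-group split is an assumption for the example.

```python
# Minimal sketch (hypothetical counts, not the study's data): testing
# whether the rate of one coded visual metaphor (e.g., "lock") differs
# across expertise groups, via a chi-square test on a contingency table.
from scipy.stats import chi2_contingency

# rows = expertise groups; cols = drawings with / without a lock metaphor
table = [
    [62, 101],   # novice
    [58,  99],   # intermediate
    [64, 107],   # expert
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
# A large p-value would be consistent with the abstract's finding that
# lock imagery recurs at similar rates regardless of expertise.
```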
5:00pm - 5:30pm
Human-Agent Teaming on Intelligence Tasks (HATIT): A Testbed for Evaluating AI in Intelligence Analysis
S. Paletz1, A. Kane2, M. Diep3, T. Nelson1, A. Porter1,3, S. Vahlkamp1
1University of Maryland, College Park, USA; 2Duquesne University; 3Fraunhofer USA Center Mid-Atlantic
Artificial intelligence (AI) has been proposed as a way to overcome distributed team cognition and information challenges (e.g., volume, velocity) in intelligence analysis. However, before deploying AI in the workplace, designers should evaluate its effects in realistic simulated environments, also known as testbeds. We conducted interviews with intelligence professionals; these interviews, combined with our research needs, yielded the requirements, design, and creation of the Human-Agent Teaming on Intelligence Tasks (HATIT) testbed. HATIT includes a web-based software platform; a shift-handover intelligence task set in a fictional world spanning 427 pages and 60 documents; and an initial, static AI agent called “Illuminate” that summarizes documents and provides topic models of social media posts. HATIT enables controlled experiments in a realistic simulation, letting us evaluate the effects of different AI agents on perceptions (e.g., trust, workload), problem solving and team cognition, and information search before deployment in actual work settings with real consequences.
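For a rough sense of the two capabilities the abstract attributes to “Illuminate”, the sketch below wires up an extractive summarizer and a topic model from off-the-shelf scikit-learn components. The summarizer heuristic and the toy posts are hypothetical stand-ins, not the HATIT implementation.

```python
# Minimal sketch (hypothetical, not the HATIT code): document summaries
# plus a topic model over social media posts, using scikit-learn.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

def summarize(document: str, n_sentences: int = 2) -> str:
    """Crude extractive summary: keep the sentences whose average
    TF-IDF weight is highest, in their original order."""
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    if len(sentences) <= n_sentences:
        return document
    tfidf = TfidfVectorizer().fit_transform(sentences)
    scores = np.asarray(tfidf.mean(axis=1)).ravel()
    keep = sorted(np.argsort(scores)[-n_sentences:])
    return ". ".join(sentences[i] for i in keep) + "."

print(summarize("The convoy left the port at dawn. Weather was mild. "
                "Local sources report the convoy reached the border."))

# Toy "social media" posts; a topic model surfaces recurring themes.
posts = [
    "port closure delays the shipping convoy",
    "convoy rerouted after the port closure",
    "election rally draws a large crowd downtown",
    "downtown rally ends peacefully after the election speech",
]
vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-3:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```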