Conference Agenda
Overview and details of the sessions of this conference. Please select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).
Session Overview
4.01. Reimagining Records in the Age of AI
Presentations
When Artificial Intelligence Sows Doubt: The Proliferation of Fake Documents and the Challenges of Electronic Document Management
Communauté Economique des Etats de l'Afrique Centrale (CEEAC), Gabon

Short Description
AI, a powerful but risky tool, is revolutionizing document management while also fueling the proliferation of fake documents, which threaten the probative value of digital archives. Countering them requires a combination of advanced technologies, regulatory frameworks, and professional training. Archivists and records managers, on the front line, must adopt a proactive stance, build technological skills, and strengthen trust in digital systems.

Abstract
Our presentation explores the impact of artificial intelligence (AI) on electronic document management (EDM), highlighting the challenges posed by the proliferation of fake documents. Generated with technologies such as deepfakes and generative models, these documents call into question the authenticity and integrity of digital archives. The rise of remote work, the wide availability of AI tools, and the absence of robust standards all exacerbate the problem. The impacts include declining trust in EDM systems, increased fraud, legal disputes, and a heavier workload for information professionals. Technical countermeasures are proposed, including blockchain, AI-based anomaly detection, and digital signatures; continuing education, international cooperation, and stronger legal frameworks are also essential. Information professionals must adopt a proactive approach, integrating cutting-edge technologies while raising user awareness and collaborating globally to preserve the credibility of digital archives in the face of this growing threat.

Inclusive Metadata: Can AI Think Like an Archivist?
The Luster Company, United States of America

Short Description
This paper explores culturally competent and racially conscious archival practices in the African diaspora, focusing on ethical metadata creation and equitable representation. To complement archivists' efforts, it incorporates experiments with AI tools such as CustomGPT, which are trained to revise metadata using inclusive guidelines. The paper ties these approaches to digital access, colonial records, and fostering a global dialogue on creating more inclusive archives.

Abstract
The proposed paper introduces strategies for developing culturally competent and racially conscious archival practices, prioritizing ethical representation of Black diasporic stories. Since 2018, Luster has explored the role of digital innovation and artificial intelligence (AI) in enhancing metadata creation and promoting equitable access, first as a member of the Data as Collections initiative and then as an independent archival consultant working with Black collections. By combining traditional archival methodologies with advanced digital tools, the presentation reflects on how archives can preserve and share diverse histories across borders. It will explore three key themes:
1. Managing Memories: How archives can ethically preserve (without exploiting) displaced and underrepresented Black histories, including oral traditions, folklore, and religious practices unique to Black diasporic communities.
2. Records of Rights: Strategies for equitable information management, with a focus on addressing gaps in colonial records and modern Black family stories. This includes navigating ethical challenges in working with colonial archives, particularly within European contexts such as Spain.
3. Digital Accessibility and AI Integration: Case studies using CustomGPTs to revise metadata according to inclusive guidelines.
These experiments reveal the potential for AI to support archivists in improving metadata practices, while also pointing to its limitations and the need for human oversight. This paper builds on principles outlined in the TEDx talk "Archives Have the Power to Boost Marginalized Voices." It underscores the responsibility of archivists to consciously include underrepresented voices in archival records and to use technology to amplify these narratives. The case study on CustomGPT illustrates how AI can assist in creating more inclusive metadata, ensuring that archives represent the full complexity of Black diasporic life. Critical questions include:
• How can AI tools enhance ethical and inclusive metadata practices without replacing human expertise?
• What are the ethical considerations for archivists using AI to work with sensitive and historically marginalized records?
• How can traditional archival practices and digital innovations work together to create equitable and accessible archives for marginalized communities?
By integrating AI with culturally conscious practices, this paper offers strategies for ethical, inclusive archives. It aligns with ICA themes, fostering dialogue on leveraging technology.

Challenging the Records Continuum Model
Tampere University, Finland

Short Description
The Records Continuum Model (RCM) is currently the favored conceptualization of the digital record life span because it allows archival needs to be considered from the point of record creation. However, the RCM contrasts with long-term preservation models, which, along with the datafication of collections, challenge us to rethink records in new ways. When AI introduces new uses for data in the custody of archives, the perspective provided by the RCM seems too narrow and fixed.

Abstract
In the Life Cycle Model, the record life span is conceptualized as a set of phases.
According to this model, over time, user groups change, and the custody of and responsibility for records shift from records managers to archivists. This division into two professional groups is especially problematic in a digital environment, where archival requirements must be considered from the beginning. The Records Continuum Model addresses this problem: records may simultaneously serve both societal and organizational needs, so the record life span cannot be divided into distinct spheres managed exclusively by either records managers or archivists. The Records Continuum Model thus resolves the internal division within the records profession. It allows for a unified perspective in which all records professionals share a common cause and work to guarantee the authenticity, integrity, reliability, and usability of records. The underlying assumption is that these characteristics are crucial for all record users. Records are implicitly seen as a static resource that, as far as possible, remains unaltered to serve the needs of all users, regardless of the moment in the record's life span.
When we look at digital long-term preservation, we encounter an entirely different conceptualization. Long-term preservation is not something done to the data in custody; it is a set of activities performed for a user group. The OAIS model expects the "designated community" of the preservation effort to be defined; the knowledge and needs of that community determine what is required to keep the data usable. Defining the designated community is a challenging task, especially for institutions mandated to serve wide and diverse audiences. A practical example of the effect of the designated community is farmers' reports of yearly crops. A genealogist might be happy to find the archived PDF form filled in by the farmer himself, whereas a researcher of economic history probably wants to process the information statistically, requiring the same information as SPSS or Excel sheets. Whom do the archives serve? The answer affects all parts of the long-term preservation system. AI brings new methods for processing archival information (e.g., handwritten text recognition), meaning that archival records have new uses and new users.

ODF as the Next Frontier in Public Record Preservation: Innovations for the AI-Driven Era
Akiaka, Korea, Republic of (South Korea)

Short Description
ODF offers a transformative leap for public record preservation in the AI era. Its open, XML-based design safeguards semantic structure and streamlines machine processing, surpassing PDF/A-1's static limitations. By facilitating advanced text analytics, metadata linkages, and cross-repository data mining, ODF enables innovative services, from intelligent classification to predictive policy modeling, while ensuring long-term interoperability and archival integrity.

Abstract
The exponential growth of artificial intelligence has intensified the need for an advanced, machine-readable format for long-term public record management. PDF/A-1, once deemed a reliable archival standard, has gradually exposed technical deficiencies: static structures, limited semantic fidelity, and high error rates during conversions of complex office documents. These limitations hinder large-scale text analytics, computational linguistics, and adaptive data modeling, capabilities that governments increasingly require to enhance policy formulation and deliver citizen-centric services.
Open Document Format (ODF) emerges as a compelling alternative, distinguished by its structured, XML-based approach to storing text, metadata, and design elements. Unlike proprietary or rigidly encoded formats, ODF intrinsically preserves semantic layers, thereby streamlining high-level machine processing.
This trait is particularly salient for advanced algorithms in natural language understanding (NLU), knowledge extraction, and automated classification, as they can access textual components without convoluted parsing routines. Furthermore, ODF's vendor-neutral orientation aligns with international standards, reducing licensing constraints and ensuring interoperability across software ecosystems.
Such interoperability paves the way for innovative record-management applications. For instance, agencies could implement intelligent classification engines capable of contextual data extraction, enabling real-time generation of predictive policy models. Granular metadata tagging within ODF documents facilitates semantic linkages across separate repositories, allowing cross-institutional data mining and synergy. These processes unlock new opportunities for big-data analytics, collaborative governance, and citizen-driven engagement tools, ultimately elevating transparency and administrative efficiency.
However, successful migration from PDF/A-1 to ODF demands a committed, forward-looking strategy. Beyond mere technical retooling, stakeholders must integrate robust training programs, refined metadata schemas, and clear legislative guidelines. This holistic approach ensures that standardization, semantic enrichment, and AI-driven transformations become embedded in each stage of the archival lifecycle. By embracing ODF, public institutions can safeguard the trustworthiness of records while pioneering cutting-edge services that harness the power of AI, thereby reinforcing their relevance and impact in an increasingly data-centric world.
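The abstract's central technical claim, that ODF's XML packaging lets software reach textual content without convoluted parsing routines, can be illustrated with a minimal sketch. An ODF file is a ZIP archive whose body lives in content.xml, so the Python standard library alone suffices for basic text extraction. (This is an editorial illustration, not part of the presentation; the function name and sample document are invented, and only the OASIS text namespace and package layout come from the ODF standard.)

```python
import io
import zipfile
import xml.etree.ElementTree as ET

# ODF text namespace, per the OASIS OpenDocument specification.
TEXT_NS = "urn:oasis:names:tc:opendocument:xmlns:text:1.0"

def extract_paragraphs(odf_bytes: bytes) -> list[str]:
    """Return the plain text of every <text:p> element in an ODF package."""
    with zipfile.ZipFile(io.BytesIO(odf_bytes)) as z:
        root = ET.fromstring(z.read("content.xml"))
    return ["".join(p.itertext()) for p in root.iter(f"{{{TEXT_NS}}}p")]

# Build a minimal stand-in ODF package (a hypothetical crop report,
# echoing the farmers' example from the previous abstract).
content = f"""<?xml version="1.0" encoding="UTF-8"?>
<office:document-content
    xmlns:office="urn:oasis:names:tc:opendocument:xmlns:office:1.0"
    xmlns:text="{TEXT_NS}">
  <office:body><office:text>
    <text:p>Annual crop report, 2024.</text:p>
    <text:p>Yield: 12.4 tonnes per hectare.</text:p>
  </office:text></office:body>
</office:document-content>"""

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("content.xml", content)

print(extract_paragraphs(buf.getvalue()))
# → ['Annual crop report, 2024.', 'Yield: 12.4 tonnes per hectare.']
```

A comparable extraction from PDF/A-1 requires reconstructing reading order from positioned glyphs, which is precisely the "convoluted parsing" the presenters contrast ODF against.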