Conference Agenda


Please note that all times are shown in the time zone of the conference (CEST).

Session Overview
Session: B1S3_BP: AI Literacy, Tools & Pedagogical Integration
Time: Monday, 22/Sept/2025, 11:15am - 1:20pm
Session Chair: Heidi Enwald
Location: MG2/01.10
Parallel session; 80 persons

Presentations

A Transdisciplinary Course on AI Literacy: From Concept to Reality

Anna C. Véron, Marco E. Weber, Gary Seitz

University of Zurich, Switzerland

The emergence of generative artificial intelligence (AI) technologies has rapidly transformed the landscape of knowledge creation and communication, presenting great opportunities but also significant challenges, particularly for students at higher education institutions (Furze, 2024). AI literacy—essentially an extension of information literacy—has come to the fore as a central skill for effective and ethical engagement with these new tools. In addition to providing a foundational understanding of their functional possibilities and limitations, AI literacy aims to foster students' abilities to critically evaluate the provenance of individual tools and models, as well as to consider the broader societal, environmental, and ethical implications of their use (UNESCO, 2021). To achieve this comprehensive understanding, perspectives from diverse disciplines must converge, making interdisciplinary collaboration essential for successful AI literacy training—not only among lecturers but also among students, who bring their own disciplinary backgrounds and perspectives into the learning process.

This talk introduces the design and implementation of the course “ChatGPT and Beyond: Interdisciplinary Approaches to AI Literacy” developed by the University Library Zurich and the School of Transdisciplinary Studies at the University of Zurich, Switzerland. The course aligns with the academic library’s mission to promote information literacy and aims to equip Bachelor’s and Master’s students from various disciplines with the skills needed to responsibly navigate and innovate within the rapidly evolving landscape of generative AI.

The seminar-style course features eleven lecturers from a range of disciplines, including computational linguistics, information science, health science, art, and law. We combined flipped-classroom elements and practical workshops on generative AI tools with discussions addressing critical issues such as societal implications, environmental concerns, stereotyped or biased content, and the power dynamics embedded in AI systems.

As a preliminary outcome of the course, the students' assessment portfolios demonstrate that participants develop a nuanced understanding of AI, becoming equipped to critically assess and apply generative AI tools in ethical and effective ways.

Finally, we highlight the vital role of academic libraries in promoting both information and AI literacy. Libraries act as a melting pot for interdisciplinary collaboration, providing spaces where diverse perspectives can come together to tackle complex challenges. As key facilitators of knowledge and literacy, academic libraries are uniquely positioned to lead the way in fostering responsible engagement with generative AI technologies.

References

Furze, L. (2024). Practical AI Strategies: Engaging with Generative AI in Education. Melbourne, Australia: Amba Press.

UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. Paris. Retrieved 17 August 2025 from https://unesdoc.unesco.org/ark:/48223/pf0000381137



AI Literacy for Faculty: Librarians as Agents of Responsible AI Adoption

Tatiana Usova

Carnegie Mellon University in Qatar, Qatar

The rapid advancement of generative AI is profoundly influencing information-seeking behaviors within the academic community. Many scholars have integrated these cutting-edge technologies into their workflows, relying increasingly on general-purpose AI tools such as ChatGPT, Perplexity, and Copilot in their research processes. Conversations with faculty reveal widespread concerns about the biases and hallucinations of these platforms, as well as a general lack of awareness of AI tools intended for academic research.

The presentation showcases a comprehensive program developed by the library at Carnegie Mellon University in Qatar, designed to empower the academic community with essential knowledge of AI technologies and their application in research. While AI literacy instruction in academic libraries has traditionally focused on students, there is a significant gap in the literature regarding librarians' role in supporting faculty scholars in understanding AI and leveraging its potential to enhance research practices. This initiative addresses that gap by offering a multifaceted program that includes presentations at faculty retreats, promotion of the AI LibGuide, hands-on workshops to explore new apps and master the art of prompting, and personalized support through one-on-one consultations. As information professionals, librarians take the lead in learning relevant AI tools, introducing them to faculty, and educating scholars on their functionalities, benefits, and limitations. The selected tools include university-subscribed text-extraction platforms such as Scite.ai and Keenious, AI assistants integrated into the scholarly databases Scopus and Dimensions, and popular tools with free plans, including Research Rabbit and Consensus.

The presentation covers strategies for engaging stakeholders in testing AI tools and provides practical considerations for planning and executing similar faculty-focused programs. Additionally, it explores the faculty’s shifting attitude toward using generative AI tools in the research process. The overarching goal is to inspire librarians to take an active role in advancing AI literacy on campus, facilitate the responsible adoption of AI in research, and demonstrate how AI-powered tools can enhance research productivity and scholarly workflows. As generative AI continues to transform the landscape of academic research, librarians drive the effort of educating scholars on the intelligent and efficient use of these technologies, helping researchers to save time, enhance the quality of their work, and accelerate progress in their fields.



Faculty Views on Generative AI Tools – Case: Primo Research Assistant

Riikka Sinisalo, Essi Prykäri

LUT University, Finland

Generative Artificial Intelligence (GenAI) tools such as ChatGPT have been a major topic of conversation in the general public and academia for a few years now. In the age of GenAI, information retrieval has undergone a significant paradigm shift, as seen in search engines and recommender systems. However, concerns have been raised about the quality of AI-produced texts and about unethical practices challenging academic integrity (see Miao et al., 2024; Alkaissi & McFarlane, 2023). Could a GenAI tool using curated metadata and verified referencing be a more suitable option for higher education?

Primo Research Assistant

Ex Libris Primo Research Assistant was released as a beta version in late 2024. It uses Clarivate's GenAI platform, which is based on GPT-4, and searches the Ex Libris Central Discovery Index (CDI), containing over 5 billion metadata records of peer-reviewed scientific literature (Lecaudey, 2024). Primo Research Assistant enables users to make queries in natural language by asking research questions. The Research Assistant provides five sources and an AI-generated summary answering the question in the language in which it was asked. However, not all the sources provided may be available as full text, depending on database subscriptions.
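To make the workflow described above concrete, the following minimal Python sketch shows the general retrieval-augmented pattern the abstract describes: a natural-language question is matched against a metadata index, the top five records are passed to a generative step, and a summary with references is returned. This is an illustrative sketch only, not Ex Libris or Clarivate code; the names MetadataIndex, Record, summarise and ask are hypothetical, and the keyword ranking stands in for the actual semantic retrieval and GPT-4-based summarisation.

# Conceptual sketch (hypothetical names throughout, not the vendor's API):
# question -> top-5 metadata records -> generated summary citing those records.
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    abstract: str
    link: str

class MetadataIndex:
    """Stand-in for a large index of peer-reviewed metadata records (cf. CDI)."""

    def __init__(self, records: list[Record]):
        self.records = records

    def search(self, question: str, k: int = 5) -> list[Record]:
        # Naive keyword-overlap ranking; a production system would use
        # semantic retrieval over billions of records.
        terms = set(question.lower().split())
        ranked = sorted(
            self.records,
            key=lambda r: len(terms & set(f"{r.title} {r.abstract}".lower().split())),
            reverse=True,
        )
        return ranked[:k]

def summarise(question: str, sources: list[Record]) -> str:
    # Placeholder for the generative step: a real assistant would prompt an
    # LLM with the question and the retrieved abstracts and cite each source.
    refs = "; ".join(f"[{i + 1}] {s.title}" for i, s in enumerate(sources))
    return f"Summary answering '{question}', grounded in: {refs}"

def ask(index: MetadataIndex, question: str) -> tuple[str, list[Record]]:
    """Return an AI-style summary plus the five records it is based on."""
    sources = index.search(question, k=5)
    return summarise(question, sources), sources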

The Survey

University staff were asked to test the Research Assistant and answer a brief survey. The survey was conducted to open discussion and to gather thoughts on the perceived usefulness and possible challenges of Primo Research Assistant. Furthermore, the results help the academic library make an informed decision on whether to include the Research Assistant as part of the library's Primo discovery service. In-depth interviews will be conducted during spring 2025.

The Results & Discussion

The preliminary results indicate that staff mostly have a positive outlook on the Research Assistant. Using GenAI was seen as a skill for students' future working life, and not being able to use GenAI tools would put them at a disadvantage. However, there were some concerns that students would not familiarize themselves with the sources given and would just use the summary provided, which might give them a skewed view of the topic. The summary provided might also encourage students to plagiarize, which mirrors findings from the literature.

The academic librarians would appreciate more transparency in how the articles are selected and ranked. There are also concerns about the coverage of the Research Assistant, as some databases do not allow their metadata to be harvested. In addition, it was noted that the language in which the research question was asked had a bearing on the quality of the summary, since the translation from English to Finnish varied and the sources were mainly in English.

Given the preliminary findings, the Research Assistant is likely to be deployed during summer 2025. Going forward, to tackle some of these concerns, the ethical use of the Research Assistant will be included in information literacy teaching and guidance.

References

Alkaissi, H., & McFarlane, S. I. (2023). Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus, 15(2): e35179. https://doi.org/10.7759/cureus.35179

Lecaudey, T. (2024). Combining trusted contents and generative AI in the Primo Research Assistant. IFLA Asia Oceania Regional Division Committee. Retrieved 22 August 2025 from https://repository.ifla.org/handle/20.500.14598/3664

Miao, J., Thongprayoon, C., Suppadungsuk, S., Garcia Valencia, O. A., Qureshi, F., & Cheungpasitporn, W. (2024). Ethical dilemmas in using AI for academic writing and an example framework for peer review in nephrology academia: A narrative review. Clinics and Practice, 14(1), 89–105. https://doi.org/10.3390/clinpract14010008



Generative Artificial Intelligence Skills in Schools: “It is an intelligence that is not natural, but it is created by a different intelligent form of life”

Konstantina Martzoukou, Chinedu Pascal Ezenkwu

Robert Gordon University, UK

The use of Generative Artificial Intelligence (GenAI) has generated much debate in the past year, capturing the public imagination and sparking intellectual discourse on the future challenges and opportunities of an AI-driven society. GenAI, which is made publicly available via general-purpose GenAI tools and is now also incorporated into search engines, is becoming an integral part of young people's everyday lives, changing how they source and use information for learning and everyday life purposes. Carrying promises to enhance and even revolutionise education, GenAI ushers in a new era of personalised learning experiences, tailored support and accessible education. In this emerging GenAI reality, fostering critical thinking, ethical awareness, information/digital literacy and resilience is necessary for equipping young people to navigate a rapidly changing world responsibly.

This paper presents the key results of a research and co-creation project, which explored young people's engagement with, perspectives on, and experiences of GenAI tools. Empirical data were collected from eighteen underrepresented secondary school students (13-year-olds) (e.g., Black, Asian, minority ethnic and low socioeconomic groups, learning differences) via practical activities, focus groups and questionnaires. Based on their input, a series of animated video cartoon stories on GenAI were developed, allowing them to convey their voices and experiences. An open educational toolkit on GenAI was also developed, with resources and activities to be used in class, facilitating conversations and engagement with the challenges and advantages of GenAI. Young people engaged directly with a GenAI tool to find information on the UNESCO Sustainable Development Goals and to explore issues related to bias, misrepresentation, and information literacy, via an imaginary scenario that involved the arrival of a GenAI teacher, who impersonated most of the characteristics of modern GenAI tools. With the direct input of students, different cartoon characters were created, who became the research participants' 'body doubles' to explore pressing issues in the AI realities they experienced.

In relation to text-based prompts, focus group questions addressed use: "What prompts (questions) would you advise your cartoon character to use if they searched for that topic using GenAI?"; transparency/trust: "Would your cartoon character know how this tool generates its content?"; information literacy: "Would your cartoon character find the generated text or visual relevant/accurate/current/credible/reliable/at the right level?"; bias/inclusion/discrimination: "Would your cartoon character find any bias showing in the AI outputs? Would they find that the responses treat all people equally?"; and privacy/data safety and security: "Are there any privacy, data safety and security risks for your cartoon character in using this technology?" In addition, positive uses of GenAI were explored in critical reflection: "Would using GenAI be helpful for your cartoon character?". In relation to image-based prompts, the approach used was adapted from the BRIDGE project, asking critical information literacy questions: "What do you see and what happens in the picture?", "Would it be the same or different if a human made it?", "Who is the one who created this image? Is it you or the GenAI tool?" and "What does the image make you feel like?".

Questionnaires explored young people's use of GenAI tools, how they would describe GenAI to someone of their age, topics they had already searched for, the things they did and did not like, their confidence and feelings, as well as what the future will look like. At the point of writing this abstract, the results are still being collected and analysed. This research project aims to make a positive impact on young people's learning by bringing attention to the importance of understanding their changing GenAI-empowered realities. Via its 'co-creation' approach, it involves educational interventions that shift the focus of research to their direct engagement and their human rights. This work was supported by the Engineering and Physical Sciences Research Council [EP/Y009800/1], through Responsible AI UK funding (RAI-SK-BID-00024).
