Conference Agenda

Overview and details of the sessions of this conference.

Please note that all times are shown in the time zone of the conference (CEST).

Session Overview
Session: B1S1_PP: AI Integration and Readiness in Higher Education
Time: Monday, 22/Sept/2025, 11:15am - 1:20pm

Session Chair: Mihaela Banek Zorica
Location: MG1/00.04

Plenary talks / 396 persons

Presentations

Information Literacy and Artificial Intelligence: A Library and Information Science Perspective on Effects, Research Questions, Challenges and Opportunities

Joachim Griesbaum1, Stefan Dreisiebner2, Antje Michel3, Inka Tappenbeck4, Anke Wittich5

1University of Hildesheim, Germany; 2Carinthia University of Applied Sciences, Austria; 3University of Applied Sciences Potsdam, Germany; 4TH Köln – University of Applied Sciences, Germany; 5Hochschule Hannover University of Applied Sciences and Art, Germany

This article addresses the opportunities and challenges of artificial intelligence (AI) for information behavior and the promotion of information literacy. It summarizes the results of a workshop on information literacy and AI held in September 2024 in Hildesheim, Germany. In the workshop, thirteen librarians and information scientists, each of whom had previously written a position paper, discussed three questions.

1. What impact does AI have on existing concepts of information literacy?

2. What questions arise from AI for information science research in the field of information literacy?

3. What challenges and opportunities arise in the promotion of information literacy through AI?

The position papers and results of the workshop have been published in German in Dreisiebner et al. (2024). However, the authors believe that the findings should also be presented and discussed on an international level. The main results are outlined below.

Regarding the first question, the spread of generative AI is seen as transforming information markets by influencing the way information is provided and used. Search services and sources are expected to become less important, leading to both new opportunities and challenges in assessing information quality and transparency. AI can foster open science, reduce transaction costs, and change the role of users from consumers to producers of information. At the same time, there is a need for new literacy standards to address AI, especially in scientific work. Digital ethics and AI data analysis skills should be emphasized more in education.

Concerning the second question, several issues were discussed, including: How does user behavior in information markets change when AI generates new services and business models? How can the quality of AI-generated content be assessed, and its reliability ensured? Are labeling requirements and standards effective tools for AI regulation? How can bias be mitigated, and data protection and transparency ensured? How can AI skills and lifelong learning be promoted? Does the use of AI in research and teaching affect the quality of outcomes?

Finally, with respect to the third question, social responsibility and the role of users were identified as central themes. The opportunities of AI include automating routine tasks, supporting learning processes, and developing effective concepts for teaching information literacy. AI is expected to serve as a tutor or coach, helping learners develop a more critical approach to information. However, challenges arise from a lack of transparency, inadequate labeling, and legal uncertainties. Promoting information literacy requires interdisciplinary programs and well-trained educators. Users must critically assess AI to protect their autonomy. While the idea of an AI tutor was discussed, concerns were raised about loss of control and diminished intrinsic motivation.

In conclusion, it is clear that the spread of AI has a significant impact on information markets and information behavior, with profound implications for the concept and promotion of information literacy. The workshop represents a contribution from information science to foster understanding of how AI affects users, educational processes, and society. It not only identified key research areas but also highlighted the political need for education and regulation, discussing potential solutions.

References

Dreisiebner, S., Griesbaum, J., Michel, A., Tappenbeck, I., & Wittich, A. (2024). Informationskompetenz und Künstliche Intelligenz - Konzepte, Herausforderungen und Chancen. Ein Beitrag der Fachgruppe Informationskompetenz der KIBA. Hildesheim: Universitätsverlag Hildesheim.



From Action to Awareness: Ethical AI Literacy in Higher Education

Monika Krakowska, Magdalena Zych

Jagiellonian University, Krakow, Poland

Objectives

The integration of generative AI tools in academia raises ethical challenges for information literacy. This study examines students’ use of ChatGPT for information retrieval and creative tasks, focusing on ethical awareness; on alignment with the globally recognized ACRL (2015) Framework for Information Literacy, particularly in attribution, source evaluation, and responsible use; and on strategies for integrating AI literacy into higher education curricula. Given its widespread adoption in academic information literacy education, the ACRL Framework provides a relevant foundation for assessing ethical engagement with AI tools. This research highlights the need to prepare students for responsible interaction with AI.

Approach and Methodology

This study employs a mixed-methods approach, combining literature analysis, case study methodology, thematic analysis, and comparative analysis. It examines 84 library and information science (LIS) students from Jagiellonian University, who completed ethically complex tasks using ChatGPT. After providing consent, participants assessed information retrieval, content evaluation, and authorship attribution under anonymized conditions. Thematic analysis, supported by MAXQDA, identified patterns in ethical awareness, prompt engineering, and AI literacy, offering insights into students’ engagement with AI technologies in academic settings.
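
As an illustration of the coding step, the following is a minimal sketch in Python of a programmatic analogue of keyword-based theme tagging. The theme lexicon and sample response are hypothetical placeholders; the study itself developed and applied its codes in MAXQDA, not in code.

import re
from collections import Counter

# Hypothetical theme lexicon for ethical-awareness codes; the actual codes
# were developed inductively in MAXQDA.
THEMES = {
    "attribution": ["cite", "authorship", "credit", "source"],
    "accuracy": ["hallucination", "verify", "fact-check", "error"],
    "bias": ["bias", "stereotype", "skewed"],
}

def code_response(text: str) -> Counter:
    """Count theme-keyword occurrences in one anonymized student response."""
    text = text.lower()
    counts = Counter()
    for theme, keywords in THEMES.items():
        counts[theme] = sum(len(re.findall(re.escape(kw), text)) for kw in keywords)
    return counts

# Example with an invented response touching attribution and accuracy concerns.
print(code_response("I asked ChatGPT to cite its sources and checked for hallucinations."))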

Results and Discussion

Significant disparities exist in students’ ethical engagement with AI tools. Many found ChatGPT outputs useful, yet few critically assessed accuracy or bias. Ethical concerns, including proper attribution and misinformation recognition (AI hallucinations), were inconsistently addressed. Despite the ACRL Framework’s emphasis on ethical information use, students demonstrated limited application of its principles, revealing AI literacy gaps. These findings highlight the need to expand the ACRL Framework to include AI-specific competencies, such as recognizing algorithmic bias, employing thoughtful prompt engineering, and critically evaluating AI-generated content (Carroll & Borycz, 2024; Kizhakkethil & Perryman, 2024; Ndungu, 2024). While findings are not directly generalizable, they reveal barriers, skill gaps, and pedagogical shortcomings in LIS education that inform further research on AI ethics in academia. This study contributes to knowledge on responsible AI use in higher education and provides actionable recommendations for integrating AI ethics into information literacy curricula. It underscores the need for systematic, ethically grounded AI literacy training in LIS programs, equipping students with competencies for critical and ethical AI engagement.

References

Association of College and Research Libraries. (2015). Framework for Information Literacy for Higher Education. Retrieved 13 August, 2025 from https://www.ala.org/sites/default/files/acrl/content/issues/infolit/Framework_ILHE.pdf

Carroll, A. J., & Borycz, J. (2024). Integrating large language models and generative artificial intelligence tools into information literacy instruction. The Journal of Academic Librarianship, 50(4): 102899.

Kizhakkethil, P., & Perryman, C. (2024). Student Perceptions of Generative AI in LIS Coursework. Proceedings of the Association for Information Science and Technology, 61(1): 531–536.

Ndungu, M. W. (2024). Integrating basic artificial intelligence literacy with media and information literacy in higher education. Journal of Information Literacy, 18(2): 122–139.



AI as a Gamechanger in Norwegian HEI – How are the Institutions Coping?

Ane Landoy1, Karin Cecilia Alexandra Rydving2

1The Norwegian Directorate for Higher Education and Skills, Bergen, Norway; 2University of Oslo Library, Norway

Debates on the integration of digital technologies in higher education in Norway have been going on for some time, but the launch of ChatGPT in November 2022 was still considered by many to be something fundamentally different from earlier developments. In this paper we investigate the institutional responses to emerging artificial intelligence in Norwegian academia. Recent research from the Nordic Institute for Studies of Innovation, Research and Education (NIFU), funded by the Directorate for Higher Education and Skills (HK-dir), is compared with document studies of the websites of selected Norwegian universities and university colleges, and with the results of a report by HK-dir (HK-dir, 2025).

The studies from NIFU indicate that higher education institutions initially perceived AI as a regulatory issue that needed to be controlled, a perception coupled with a lack of the technological competence needed to fully consider the kind of transformation that artificial intelligence technology potentially represents. This, along with the sense of artificial intelligence being a “moving target”, “led higher education institutions to an initial state of organizational paralysis, in turn adopting a ‘wait and see’ strategy” (Korseberg & Eiken, 2024).

Further research from NIFU “shows that while the first phase after the launch was characterized by a lot of uncertainty and fear, the institutions are experiencing a change in mood among employees and students. At the same time, the rapid development and lack of knowledge and expertise about what generative artificial intelligence can and cannot do have made it challenging for institutions to develop concrete measures.” (Korseberg & Drange, 2024).

However, despite the initial uncertainty and fear and the calls for regulation, the institutions have started responding. In this paper we provide examples of how HEIs are now developing both regulations and materials about AI for their teaching staff. We do this by searching the web pages of selected institutions to find what kind of information and regulations they provide for teaching staff, looking for opportunities for better pedagogical practices, workshops, and university leadership support. We are also interested in uncovering potential dissimilarities among institutions in where the focus of AI training lies: with the leadership, the teaching staff, or the administrators? And where and how is the students’ perspective represented?
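
A minimal sketch of what such a keyword-oriented website scan could look like, in Python with the requests and BeautifulSoup libraries. The URLs and keyword list are hypothetical placeholders; the study examines real Norwegian HEI websites and may code pages manually rather than programmatically.

import requests
from bs4 import BeautifulSoup

# Hypothetical staff-guidance pages; the study selects actual institutional sites.
PAGES = {
    "University A": "https://example.edu/ai-guidelines-for-teaching-staff",
    "University B": "https://example.edu/regulations/generative-ai",
}

# Indicator terms for the aspects discussed above: regulations, pedagogical
# practices, workshops, leadership support, and the student perspective.
KEYWORDS = ["regulation", "guideline", "workshop", "pedagog", "leadership", "student"]

for institution, url in PAGES.items():
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(" ").lower()
    hits = {kw: text.count(kw) for kw in KEYWORDS}
    print(institution, hits)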

In conclusion, our study will show the situation with AI in Norwegian HEIs at a moment in time. It will describe the measures taken by some institutions, and the concerns of the university sector.

References

HK-dir (2025). Kunstig intelligens i UH-sektoren. Moglegheiter, utfordringar og viktige område framover [Artificial intelligence in the higher education sector: Opportunities, challenges and important areas ahead]. HK-dir report 01/2025.

Korseberg, L., & Drange, C. V. (2024). Kunstig intelligens i høgare utdanning: kva tenkjer lærestadane nå, etter halvtanna år med ChatGPT? [Artificial intelligence in higher education: What do the institutions think now, after a year and a half of ChatGPT?]. NIFU Innsikt 2024. Retrieved 17 August, 2025 from https://handle.net/11250/3133613

Korseberg, L., & Eiken, M. (2024). Waiting for the revolution: How higher education institutions initially responded to ChatGPT. Higher Education. https://doi.org/10.1007/s10734-024-01256-4.



“Of course, I can do it – I just don’t want to!”: An Emotional and Structural AI Readiness Scale

Anna Mierzecka1, Małgorzata Kisilowska-Szurmińska1, Marek Deja2, Karolina Brylska1

1University of Warsaw, Poland; 2Jagiellonian University, Poland

As a new technological and social phenomenon, generative artificial intelligence amazes us with human-like features that set it apart from earlier information technologies. Among other things, it offers more advanced conversationality (with emotional and phatic elements), which can appear during searching, evaluating, and selecting results, and at other stages of information practices (defining information needs, communicating outcomes, creatively applying those outcomes to research and learning). The effective use of AI as a tool can be influenced by the level of information literacy but also, quite significantly, by trust in and a critical stance toward this technology. This trust often depends on the users’ emotional stance, as they are not always able (e.g., due to information overload) to verify the responses they receive. The first tools to measure the declarative level of AI literacy (e.g., Lee & Park, 2024) are currently in development. However, we propose an approach that precedes AI literacy measurement: an instrument and a model for diagnosing readiness to use AI, which covers not only AI proficiency skills but also attitudes toward this technology.

This paper aims to investigate the readiness to use AI. The activities undertaken are based on three theoretical frameworks: 1) due to the novelty of AI as a technological solution being introduced into social use: the technology acceptance model (e.g., Venkatesh, 2000); 2) due to the information competencies necessary for using this solution: the theory of critical information literacy (Tewell, 2018); 3) due to the specificity of the tool (its conversational nature and the significant affective factor in using it): studies identifying users’ emotions and satisfaction as factors strongly influencing information behavior (Savolainen, 1995).

Methodologically, the study is oriented toward factor analysis. First, based on the existing literature, we created a measurement instrument, which we test and validate as an AI Readiness scale on a selected research sample (faculty members representing different disciplines). Second, we used a Multiple Indicators and Multiple Causes (MIMIC) model to define institutional and personal reasons for using AI; the reasons considered include both external and internal factors. The measurement indicators draw on previously available resources, AI literacy scales (e.g., Lee & Park, 2024; Ng et al., 2024) and IL scales (e.g., Pinto & Sales, 2010; Kurbanoglu, Akkoyunlu, & Umay, 2006), as inspiration for designing components to evaluate four aspects of AI readiness: cognitive, behavioral, affective, and ethical.
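
A minimal sketch of what such a MIMIC specification could look like, in Python with the semopy structural equation modeling library. The indicator and cause variable names below are hypothetical placeholders, not the instrument's actual items.

import pandas as pd
from semopy import Model

# MIMIC structure: a latent AI-readiness variable measured by indicators from
# the four aspects (cognitive, behavioral, affective, ethical), with
# institutional and personal variables as its causes.
MODEL_DESC = """
ai_readiness =~ cognitive_1 + behavioral_1 + affective_1 + ethical_1
ai_readiness ~ institutional_support + personal_motivation
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical survey data file
model = Model(MODEL_DESC)
model.fit(data)
print(model.inspect())  # estimated loadings and causal paths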

Knowledge about a given group’s readiness—or lack thereof—to use AI will be valuable in educational practice and in efforts to protect unprepared individuals who are particularly susceptible and vulnerable to the negative consequences of improper or unethical use of emerging technologies. This concern also extends to professional groups that are expected to adopt these technologies at an early stage, including the academic community. Accordingly, we have chosen to dedicate our model to the members of this group.

References

Lee, S., & Park, G. (2024). Development and validation of ChatGPT literacy scale. Current Psychology, 43: 18992–19004.

Ng, D. T. K., Wu, W., Leung, J. K. L., Chiu, T. K. F., & Chu, S. K. W. (2024). Design and validation of the AI literacy questionnaire: The affective, behavioural, cognitive and ethical approach. British Journal of Educational Technology, 55(3): 1082–1104.

Pinto, M., & Sales, D. (2010). Insights into translation students’ information literacy using the IL-HUMASS survey. Journal of Information Science, 36(5): 618–630.

Savolainen, R. (1995). Everyday life information seeking: Approaching information seeking in the context of “way of life”. Library & Information Science Research, 17(3): 259–294.

Kurbanoglu, S. S., Akkoyunlu, B., & Umay, A. (2006). Developing the information literacy self‐efficacy scale. Journal of Documentation, 62(6): 730–743.

Tewell, E. C. (2018). The practice and promise of critical information literacy: Academic librarians’ involvement in critical library instruction. College & Research Libraries, 79(1): 10–34.

Venkatesh, V. (2000). Determinants of perceived ease of use: Integrating control, intrinsic motivation, and emotion into the technology acceptance model. Information Systems Research, 11(4): 342–365.



Beyond AI-literacy. Growing up with an Artificial Lifetime Compeer

Laszlo Z. Karvalics

Institute of Advanced Studies Kőszeg, Hungary

Two years ago, I started to imagine and describe, in Hungarian, the outlines of an AI-driven system that transcends state-of-the-art human-machine interaction models, integrating all existing and future apps, services, and tools dedicated to supporting every kind of individual human need and form of action, at every age, for a lifetime. It is a primordial framework that comes into existence when an individual interacts with an artificial intelligence entity for the first time: without fail, this initiation point will be a cybertoy, and the first human support will come from pre-trained parents.

But from this point on, the individual and their machine compeer become a co-habitation unit: a human and its artificial compeer, mutually shaped by their private interactions. Their growth is simultaneous and interconnected. I call the artificial part of this dual entity the Artificial Lifetime Compeer (ALC). I will provide:

• a brief description of the ALC model, as an English-language introduction to the concept

• an overview of the ALC-related literacy landscape

• a brave reformulation of the AI-alignment discourse, as an outcome of ALC-based development thinking

ALC: An Ultimate Step toward Personified Artificial Intelligence

ALC is more than a personal digital assistant, an intelligent agent, a digital twin, or a collaborative robot. ALC is not a nurse, servant, secretary, coach, agent, medical doctor, teacher, librarian, broker, or assistant. The contender solutions supporting all these roles are attached and interlocked to the basic structure of the ALC, which orchestrates the growing “solution reservoir”, integrating and customizing every useful product (tools and services) according to the human’s needs.

As an “intelligent superagent”, the ALC manages all the interface technologies and all the channels connected to the smart environment. It coordinates the diverse systems of preventive and corrective health monitoring, influences dietary practices, supports learning and information-seeking behaviour, and performs transactions in real and virtual worlds.

The Human Part: Literacies for ALC-Intercompatibility

The human part should nurture a rich literacy complex to simultaneously understand, use, and shape its artificial counterpart, through a combination of assisted and spontaneous learning in a mainly gamified environment.

A realistic vision of the ALC needs to implement new branches of “teleological” knowledge about this co-evolutionary practice, while developing new skills and abilities in computer literacy, digital literacy, AI literacy, algorithmic literacy, and coding literacy.

AI-Alignment: Time to Extend the Horizon

The ALC fulfils the whole alignment mission in its own field and has a compelling effect on other fields, which can use the ALC as an “alignment anchor” when planning new tools and services. But “human compatible AI”, as Russell (2022) names it, is only one third of the alignment horizon. The “AI-compatible human” is the second part, and “future-compatible AI-human hybrids” are the third. The idea of the ALC provides a disquieting first base from which to reorient the current debates.

References

Russell, S. (2022). Artificial intelligence and the problem of control. In H. Werthner, E. Prem, E.A. Lee, & C. Ghezzi (Eds.), Perspectives on Digital Humanism. Cham: Springer.