Information Literacy and Artificial Intelligence: A library and information science perspective on effects, research questions, challenges and opportunities
Joachim Griesbaum1, Stefan Dreisiebner2, Antje Michel3, Inka Tappenbeck4, Anke Wittich5
1University of Hildesheim, Germany; 2Carinthia University of Applied Sciences, Austria; 3University of Applied Sciences Potsdam, Germany; 4TH Köln – University of Applied Sciences, Germany; 5Hochschule Hannover University of Applied Sciences and Art, Germany
This article addresses the opportunities and challenges of artificial intelligence (AI) for information behavior and the promotion of information literacy. It summarizes the results of a workshop on information literacy and AI held in September 2024 in Hildesheim, Germany. In the workshop, thirteen librarians and information scientists who had previously written position papers discussed three questions:
1. What impact does AI have on existing concepts of information literacy?
2. What questions arise from AI for information science research in the field of information literacy?
3. What challenges and opportunities arise in the promotion of information literacy through AI?
The position papers and results of the workshop have been published in German in Dreisiebner et al. (2024). However, the authors believe that the findings should also be presented and discussed on an international level. The main results are outlined below.
Regarding the first question, the spread of generative AI is seen as transforming information markets by influencing the way information is provided and used. Search services and sources are expected to become less important, leading to both new opportunities and challenges in assessing information quality and transparency. AI can foster open science, reduce transaction costs, and change the role of users from consumers to producers of information. At the same time, there is a need for new literacy standards that address AI, especially in scientific work. Digital ethics and AI data analysis skills should be emphasized more strongly in education.
Concerning the second question, several issues were discussed, including: How does user behavior in information markets change when AI generates new services and business models? How can the quality of AI-generated content be assessed, and its reliability ensured? Are labeling requirements and standards effective tools for AI regulation? How can bias be mitigated, and data protection and transparency ensured? How can AI skills and lifelong learning be promoted? Does the use of AI in research and teaching affect the quality of outcomes?
Finally, with respect to the third question, social responsibility and the role of users were identified as central themes. The opportunities of AI include automating routine tasks, supporting learning processes, and developing effective concepts for teaching information literacy. AI is expected to serve as a tutor or coach, helping learners develop a more critical approach to information. However, challenges arise from a lack of transparency, inadequate labeling, and legal uncertainties. Promoting information literacy requires interdisciplinary programs and well-trained educators. Users must critically assess AI to protect their autonomy. While the idea of an AI tutor was discussed, concerns were raised about loss of control and diminished intrinsic motivation.
In conclusion, it is clear that the spread of AI has a significant impact on information markets and information behavior, with profound implications for the concept and promotion of information literacy. The workshop represents a contribution from information science to foster understanding of how AI affects users, educational processes, and society. It not only identified key research areas but also highlighted the political need for education and regulation and discussed potential solutions.
References
Dreisiebner, S., Griesbaum, J., Michel, A., Tappenbeck, I., & Wittich, A. (2024). Informationskompetenz und Künstliche Intelligenz - Konzepte, Herausforderungen und Chancen. Ein Beitrag der Fachgruppe Informationskompetenz der KIBA [Information literacy and artificial intelligence - concepts, challenges and opportunities. A contribution of the Information Literacy Section of KIBA]. Hildesheim: Universitätsverlag Hildesheim.
From Action to Awareness: Ethical AI Literacy in Higher Education
Monika Krakowska, Magdalena Zych
Jagiellonian University in Krakow, Poland
Objectives
The integration of generative AI tools in academia raises ethical challenges for information literacy. This study examines students' use of ChatGPT for information retrieval and creative tasks, focusing on ethical awareness; alignment with the ACRL (2015) Framework, particularly in attribution, source evaluation, and responsible use; and strategies for integrating AI literacy into higher education. Emphasizing ethical engagement, the research highlights the need to prepare students for responsible interaction with AI.
Approach and methodology
This study uses a mixed-methods approach, combining a literature review with a case study of 84 Jagiellonian University students. Participants completed ethically complex tasks with ChatGPT, evaluating information retrieval, content assessment, and authorship attribution. Thematic analysis, supported by MAXQDA, identified patterns in ethical awareness, prompt engineering, and AI literacy competencies, offering insights into students’ approaches to engaging with AI technologies in academic contexts.
Results and discussion
Significant disparities exist in students' ethical engagement with AI tools. While many participants found ChatGPT outputs useful, few critically assessed the accuracy or bias of the information. Ethical concerns, such as proper attribution of AI-generated content and the recognition of misinformation (AI hallucinations), were inconsistently addressed. Despite the ACRL Framework's emphasis on ethical information use, students demonstrated limited application of its principles. This highlights the need to expand the framework to include AI-specific competencies, such as identifying algorithmic bias, employing thoughtful prompt engineering, and critically evaluating AI-generated content. Emerging scholarly discussions propose integrating AI literacy standards into the ACRL Framework (Carroll & Borycz, 2024; Kizhakkethil & Perryman, 2024; Ndungu, 2024). Addressing this gap, the research underscores the importance of incorporating ethically grounded AI literacy into academic curricula. Participants explored AI’s ethical dimensions by choosing tasks completed with or without ChatGPT, fostering autonomy over prohibitive measures. Integrating ACRL standards with AI-specific guidelines equips students for ethical and critical engagement with AI systems. These findings contribute to the discourse on responsible AI use in higher education and offer actionable recommendations for comprehensive, ethics-focused AI literacy education.
References
Association of College and Research Libraries. (2015). Framework for Information Literacy for Higher Education. Retrieved from https://www.ala.org/sites/default/files/acrl/content/issues/infolit/Framework_ILHE.pdf
Carroll, A. J., & Borycz, J. (2024). Integrating large language models and generative artificial intelligence tools into information literacy instruction. The Journal of Academic Librarianship, 50(4), 102899.
Kizhakkethil, P., & Perryman, C. (2024). Student Perceptions of Generative AI in LIS Coursework. Proceedings of the Association for Information Science and Technology, 61(1), 531–536.
Ndungu, M. W. (2024). Integrating basic artificial intelligence literacy with media and information literacy in higher education. Journal of Information Literacy, 18(2), 122–139.
AI as a game changer in Norwegian Higher Education – how are the institutions coping?
Ane Landoy1, Karin Cecilia Alexandra Rydving2
1The Norwegian Directorate for Higher Education and Skills, Norway; 2University of Oslo Library, Norway
Debates on the integration of digital technologies in higher education in Norway have been going on for some time, but the launch of ChatGPT in November 2022 was nevertheless considered by many to be something different from the developments that had come before. In this paper we investigate the institutional responses to emerging artificial intelligence in Norwegian academia. Recent research from the Nordic Institute for Studies of Innovation, Research and Education (NIFU), funded by the Directorate for Higher Education and Skills, is compared with document studies of the websites of selected Norwegian universities and university colleges.
Studies from NIFU indicate that the higher education institutions' initial perception of AI was as a regulatory issue that needed to be controlled, coupled with a lack of the technological competence needed to fully grasp the kind of transformation that artificial intelligence potentially represents. This, along with the sense of artificial intelligence being a "moving target", "led higher education institutions to an initial state of organizational paralysis, in turn adopting a 'wait and see' strategy" (Korseberg & Eiken, 2024).
Further research from NIFU “shows that while the first phase after the launch was characterized by a lot of uncertainty and fear, the institutions are experiencing a change in mood among employees and students. At the same time, the rapid development and lack of knowledge and expertise about what generative artificial intelligence can and cannot do have made it challenging for institutions to develop concrete measures.” (Korseberg & Drange, 2024)
However, despite this call for regulation and the initial uncertainty and fear, the institutions have started responding. In this paper we provide examples of how higher education institutions are now developing both regulations and materials about AI for their teaching staff.
References
"Of course, I can do it—I just don't want to!": AI readiness scale in the context of academic research activities
Anna Mierzecka1, Małgorzata Kisilowska-Szurmińska1, Marek Deja2, Karolina Brylska1
1University of Warsaw, Poland; 2Jagiellonian University, Poland
Generative artificial intelligence, as a novel technological and social phenomenon, astonishes with human-like capabilities that set it apart from earlier information technologies. While the first instruments for measuring declarative AI literacy are now being developed, we propose a prior step: an instrument and model for diagnosing readiness to use AI—one that encompasses proficiency, attitudes toward the technology, and the situational context of its use. We focus on academic research activities, where the technical and creative potential of AI tools can markedly influence performance.
Methodologically, the study relies on factor analysis. First, drawing on the literature, we created a measurement instrument and validated it as an AI-Readiness Scale with a sample of faculty members from multiple disciplines. Second, we employed a multiple-indicators-multiple-causes (MIMIC) model to identify situational, institutional, and personal factors associated with AI readiness. Measurement indicators were inspired by the technology-acceptance model, existing AI-literacy scales, and information-literacy scales, allowing assessment of four facets of AI readiness: cognitive, behavioral, affective, and ethical.
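To make the modelling step concrete, the sketch below shows how such a MIMIC specification might look in Python using the semopy package. This is a minimal illustration, not the study's actual analysis: the latent factor, indicator, and cause variable names are hypothetical placeholders rather than the survey's real items.

```python
# Minimal MIMIC sketch with semopy; all variable names are hypothetical
# placeholders, not the study's actual survey items.
import pandas as pd
import semopy

# Measurement part: a latent AI-readiness factor loading on indicators
# for the cognitive, behavioral, affective, and ethical facets.
# Structural part: the latent factor regressed on observed causes.
MODEL_DESC = (
    "ai_readiness =~ cog1 + cog2 + beh1 + beh2 + aff1 + aff2 + eth1 + eth2\n"
    "ai_readiness ~ job_stress + peer_pressure + tool_access"
)

def fit_mimic(survey: pd.DataFrame) -> pd.DataFrame:
    """Fit the MIMIC model and return loadings, weights, and p-values."""
    model = semopy.Model(MODEL_DESC)
    model.fit(survey)  # survey columns must match the names above
    return model.inspect()

# Example usage, assuming `survey_df` holds one row per respondent:
# print(fit_mimic(survey_df))
```

In this formulation, the measurement part corresponds to a confirmatory factor analysis of the readiness facets, while the regression of the latent factor on observed covariates captures the "multiple causes" side of the MIMIC design.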
These findings are preliminary and require replication with a larger, probability-based sample before firm conclusions can be drawn. At present, we might say that accessibility of AI tools and perceptions of their usefulness appear insufficient to motivate their application in research, whereas stress linked to job demands and digital transformation has the opposite effect. Readiness is instead fostered by perceived pressure, whether institutional or peer-based, regarding academic performance. Academics view AI tools most favorably in areas where supporting software (e.g., SPSS or reference managers) is already widely accepted and transparent, notably for analytical and editorial tasks. These insights also guide university implementation strategies: beyond providing access and training, institutions should first address organizational culture and communicate clearly the opportunities and expectations surrounding new technologies.
Beyond AI-literacy. Growing up with an Artificial Lifetime Compeer (ALC)
Laszlo Z. Karvalics
Institute of Advanced Studies Kőszeg, Hungary
There is as yet no concerted thinking about the personal and individual prospects of a digital ecosystem enhanced by powerful artificial intelligence solutions.
Two years ago, I started to imagine and describe the outlines of an AI-driven system that transcends state-of-the-art models, integrating all existing and future apps, services, and tools dedicated to supporting every kind of individual human need and form of action at every age, for a lifetime. It is a primordial framework that comes into existence the first time an individual interacts with an artificial intelligence entity: without fail, this initiation point will be a cybertoy, and the first human help will come from a pre-trained parent.
From this point on, the individual and their machine compeer become a cohabitation unit: a human and an artificial compeer, mutually shaped by their private interactions. Their growth is simultaneous and interconnected, for a lifetime. I call the artificial part of this dual entity the Artificial Lifetime Compeer (ALC).
The ALC is more than a personal digital assistant, an intelligent agent, a digital twin, or a collaborative robot. The ALC is not a nurse, servant, secretary, coach, agent, medical doctor, teacher, librarian, broker, or assistant. The contender solutions supporting all these roles are attached and interlocked with the basic structure of the ALC, which orchestrates a growing "solution reservoir", integrating and customizing every useful product (tools and services) according to the human's needs.
ALCs share an identical, BIOS-like core. This initial basis of the artificial compeer, which becomes fully unique and customized, is programmable only by its human counterpart and connects to and controls implants and wearables. The ALC core should be migratable to ever newer and more complex hardware/software frameworks. It randomly retains sequences from the pair's unique interaction data stream and uses them for secure identification and authentication of the human associate, potentially revolutionizing existing cryptography- or biomarker-based solutions.
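Purely as a conceptual illustration of this identification idea, the following deliberately naive Python sketch retains random fragments of the shared interaction stream and uses them for a challenge-response check. Every name here is hypothetical, and a real system would of course require fuzzy matching, rate limiting, and cryptographic hardening.

```python
import random

class ALCCoreAuthenticator:
    """Toy model of the ALC core's identification idea: randomly retain
    fragments of the shared interaction stream and use them later to
    challenge and verify the human counterpart."""

    def __init__(self, keep_prob: float = 0.05) -> None:
        self.keep_prob = keep_prob
        self._retained: list[str] = []  # private shared-memory fragments

    def observe(self, utterance: str) -> None:
        """Randomly keep a fragment of the ongoing interaction stream."""
        if random.random() < self.keep_prob:
            self._retained.append(utterance)

    def challenge(self) -> tuple[str, str]:
        """Return (prompt, expected): the start of a retained shared
        moment, and the continuation only the counterpart should recall."""
        fragment = random.choice(self._retained)
        split = len(fragment) // 2
        return fragment[:split], fragment[split:]

    @staticmethod
    def verify(expected: str, answer: str) -> bool:
        """Exact matching stands in for the fuzzy, hardened matching a
        real deployment would need."""
        return expected.strip().lower() == answer.strip().lower()
```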
As an "intelligent superagent", the ALC manages all the interface technologies and all the channels connected to the smart environment. It coordinates the diverse systems of preventive and corrective health monitoring and influences dietary practices. It supports learning and information-seeking behaviour, fostering personal intellectual and physical development. It performs transactions in real and virtual worlds and communicates with other ALCs.
The ALC fulfils the whole alignment mission in its own field and has a compelling effect on other fields, encouraging them to use the ALC as an "alignment anchor" when planning new tools and services. But "human-compatible AI", as Russell names it, is only one third of the alignment horizon. "AI-compatible humans" are the second part, and "future-compatible AI-human hybrids" are the third. The idea of the ALC provides a disquieting first base from which to reorient the current debates.
The realistic vision of the ALC provides a new framework for the discourse on computer literacy, digital literacy, AI literacy, algorithmic literacy, and coding literacy, since the human part must command an extremely complex literacy landscape to simultaneously understand, use, and shape its artificial counterpart through a combination of assisted and spontaneous learning.