Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

Please note that all times are shown in the time zone of the conference.

Session Overview
Session
B3S1_PK: Pecha Kucha
Time:
Tuesday, 23/Sept/2025:
10:45am - 11:30am

Session Chair: Denis Kos
Location: MG1/00.04

Plenary talks / 396 persons

Presentations

“Make students leaders of their learning”

Elena Collina, Paola Rescigno

University of Bologna, Italy

Objectives

At the University of Bologna, as information literacy librarians, we were asked to offer a course worth 3 credits within the Transversal Competences project. Since 2020, the exam has been held online on MS Teams and Moodle. This change of teaching mode pushed us to foster participation through creative learning, making students work in collaborative groups in a non-judgmental atmosphere where mistakes may happen, in the belief that if you are not ready to make mistakes, you will never arrive at something new and original. We are convinced that ideas arising from a group are more likely to succeed than those any individual can develop alone. The aim now is to facilitate and promote the sharing and reuse of educational resources and to improve teaching quality through increased collaboration between actors in academia (teachers, students, administration, and management).

Methodology

Following this belief, we started a peer review activity. We encourage students from different degree courses to rate each other’s work, using a double-blind review in which students need to respect and trust each other. The exchanged review is not a judgment but an exchange and a debate. We divide the class into groups of five and assign each group the same topic, to elaborate as they choose. The task includes formulating the research question, identifying keywords, building queries, using the available tools, setting filters, and collecting five relevant references. Once this is done, the groups exchange their work and, guided by a review sheet, we ask each group to review the work done by another, where possible replicating the process it describes. The distinction between trainers and trainees fades: we move from one-way to circular communication, and communication becomes much more effective. This is a worthwhile interchange that creates a new educational environment in which everyone plays a role in creating knowledge. The activity asks students to reflect on the process they followed and to analyse the path taken by others. Upon completion, groups report not on their results but on their experience. We will also include some students’ feedback in the presentation.

Outcomes

This activity proved to be a good way to transform our job from teaching students to helping them learn. We are no longer transferring information; students make sense of it through active engagement in which information flows. Education is no longer about information; it is about how to use information.



Adapting Infosphère: Leveraging an OER Information Literacy Platform

Miriam Petrilli, Vincenzo Palatella

EPFL, Lausanne, Switzerland

In the rapidly evolving landscape of higher education, ensuring that students possess robust information literacy skills is crucial. Infosphère – a comprehensive, modular online information literacy training platform originally developed and released under a CC BY license by the Université du Québec à Montréal – provides a dynamic, customizable framework to help learners refine their abilities in navigating complex information ecosystems. Leveraging its status as an open educational resource (OER), the EPFL Library training team adapted Infosphère’s content to better align with the curriculum needs of bachelor’s and master’s students at our institution, with EPFL’s academic context, and with its language and technology specificities.

This pecha kucha will detail the process and rationale behind the customization of Infosphère. We will discuss how the platform’s open license facilitated the adaptation and revision of instructional materials to better address disciplinary nuances and student learning outcomes. By sharing these insights, we aim to highlight how the OER nature of Infosphère not only supported intellectual freedom and pedagogical innovation, but also enabled iterative refinements based on exchanges with stakeholders.

Attendees will gain practical knowledge about the implementation challenges and opportunities encountered throughout this adaptation journey. We will explore how our team collaborated with subject librarians, pedagogical and IT experts, and internal key players to establish a responsive, inclusive, and transparent development process. Moreover, we will discuss the adjustments needed to ensure that content and navigation patterns were accessible to learners with diverse backgrounds.

This session underscores the transformative potential of openly licensed educational resources, showcasing how tools like Infosphère can be molded to meet local needs while retaining their core mission of cultivating critical information literacy competencies. Ultimately, by embracing the adaptability and scalability of OER-driven platforms, academic libraries can reaffirm their commitment to fostering equitable and meaningful learning experiences for students across all stages of their academic journeys.



Citing AI with Zotero: Academic Integrity and Information Literacy

Paschalia Terzi

Georgetown University in Qatar, Qatar

Academic integrity values are often highlighted in information literacy guidelines; for example, the ACRL Framework states that learners should know how to cite the work of others (Framework for Information Literacy for Higher Education, 2015). Moreover, incorporating citation tools like Zotero into the research process benefits students both as a time-saver and as a gateway to academic integrity (Huffman, 2014) and information literacy (Kuglitsch, 2015; Veach, 2019). To that end, the instructional librarian, with the support of the Writing Center at a small but highly diverse undergraduate university in the Middle East, has conducted numerous Zotero workshops highlighting the value of citing for academic integrity and showing how to save time and effort.

Nevertheless, various challenges have arisen since the release of ChatGPT in late 2022 and the subsequent conversation about the ethical use of AI tools. Students were familiar with citing books and journal articles, but not AI-generated text. Major citation styles such as APA, MLA, and Chicago had not yet incorporated clear instructions for this material type in their handbooks. Finally, the non-replicability of AI output and the lack of straightforward sharing options made citing AI-generated text challenging.

As a result, the instructional librarian designed and delivered two workshops for undergraduate students, which were also attended by staff and faculty members due to high interest in the topic. They also produced a handout explaining how to creatively use Zotero item fields to suit the needs of different citation styles (Chicago, APA, MLA). A library guide and a workshop for faculty are also planned.

Based on the content of these workshops, this presentation is an original contribution that demonstrates practical ways to cite AI-generated text with citation tools like Zotero in popular citation styles. This practice can help undergraduate students understand and apply academic integrity values for the AI era by touching upon:

• what the major citation styles recommend about citing AI usage;

• how to consistently declare and be transparent when using AI tools (by saving queries, taking ChatGPT conversation snapshots, and providing permanent URLs); and

• ways to use AI ethically (e.g., brainstorming and keyword generation), as recommended by UNESCO.

References

European Commission. (2024). Guidelines on the Responsible Use of Generative AI in Research Developed by the European Research Area Forum. Directorate-General for Research and Innovation. Retrieved 28 August, 2025 from https://research-and-innovation.ec.europa.eu/news/all-research-and-innovation-news/guidelines-responsible-use-generative-ai-research-developed-european-research-area-forum-2024-03-20_en

ACRL. (2015). Framework for Information Literacy for Higher Education. Retrieved 28 August, 2025 from https://www.ala.org/acrl/standards/ilframework

Huffman, J. P. (2014). Citation managers as gateways to academic integrity. Journal of Business & Finance Librarianship, 19(2): 155–159. https://doi.org/10.1080/08963568.2014.883878

International Association of Scientific, Technical & Medical Publishers. (2023). Generative AI in Scholarly Communications. STM. Retrieved 28 August, 2025 from https://stm-assoc.org/wp-content/uploads/STM-GENERATIVE-AI-PAPER-2023.pdf

Kuglitsch, R. Z. (2015). Repurposing Zotero for sustainable assessment and scalable modified embedding. Reference Services Review, 43(1): 68–80. https://doi.org/10.1108/RSR-08-2014-0034

Veach, G. (2019). Information literacy instruction and citation generators: The provision of citation and plagiarism instruction. In Teaching Information Literacy and Writing Studies: Volume 2, Upper-Level and Graduate Courses (pp. 241–254). Purdue University Press. Retrieved 28 August, 2025 from https://muse.jhu.edu/pub/60/oa_edited_volume/chapter/2662947



Do AI Research Assistants Live Up to their Hype? An Exploratory Study of Some Freely Available Tools

Luis Machado

Publications Office of the European Union, Belgium

As artificial intelligence (AI) tools become increasingly widespread, “the ability to think critically and make balanced judgements about any information we encounter and use” (Information Literacy Group, 2018, 2), that is, one’s information literacy, is increasingly relevant. The answers provided by AI tools powered by large language models (LLMs) are usually quite convincing and compelling. However, we must be aware that LLMs only know what an answer should sound like, which is different from knowing what the answer should be. Their epistemological foundation “is solely based in language, not in referring to reality or the world” (Flierl, 2024, 55).

Several of these AI tools that use LLMs to generate direct answers are designed to be digital research assistants. In addition to the growing number of standalone AI services, library discovery systems and academic and research services are also integrating AI tools. This scenario compels users to be informed about the capabilities and limitations of such tools (Gusenbauer, 2023; Liu et al., 2023; Shah & Bender, 2024). In this study, we intend to contribute to this debate with a qualitative assessment of some of these freely available tools.

Since we wanted to test freely accessible AI tools, we selected Ai2 ScholarQA and ORKG Ask, as they are open access, along with the free plans of Consensus, Elicit, Perplexity, SciSpace, and Undermind. The free plans limit some functionality but do not restrict use to a finite number of days. All the selected tools are designed to act as research assistants and provide additional information on how they work.

Methodologically, we will formulate questions about a specific research topic, the development of knowledge organization systems (Machado et al., 2023). The assessment will first consider differences in the type of output the tools produce: not all of them offer a more detailed output (so-called deep research). This type of output will be assessed in terms of the correctness, detail, and grounding of its statements. The other type of output will be used to assess each tool’s consistency: we will compare the answers, and the sources listed, given to the same question at different times and with different phrasings (Tay, 2024). As for the sources, what is at stake is not only the tools’ capabilities but also where they look for the sources they cite. Despite the limitations of the study, namely its restriction to a single research topic, we believe the results will contribute to a more informed use of these tools.

References

Flierl, M. (2024). Artificial intelligence and information literacy: Hazards and opportunities. In S. Kurbanoğlu, S. Špiranec, Y. Ünal, J. Boustany, & D. Kos (Eds.), Information Experience and Information Literacy: The Eighth European Conference on Information Literacy, ECIL 2023, October 9–12, 2023, Revised Selected Papers, Part I. CCIS, vol. 2042 (pp. 52−63). Cham: Springer International Publishing.

Gusenbauer, M. (2023). Audit AI search tools now, before they skew research. Nature, 617(7961). https://doi.org/10.1038/d41586-023-01613-w

Information Literacy Group. (2018). CILIP Definition of Information Literacy. Chartered Institute of Library and Information Professionals. Retrieved 21 August, 2025 from https://infolit.org.uk/definitions-models/

Liu, N., Zhang, T., & Liang, P. (2023). Evaluating verifiability in generative search engines. In H. Bouamor, J. Pino, & K. Bali (Eds.), Findings of the Association for Computational Linguistics (pp. 7001–7025). Association for Computational Linguistics. https://doi.org/10.18653/v1/2023.findings-emnlp.467

Machado, L. M. O., Martínez-Ávila, D., Almeida, M. B., & Borges, M. M. (2023). Towards a moderate realist foundation for ontological knowledge organization systems: The question of the naturalness of classifications. Journal of Information Science. https://doi.org/10.1177/01655515231160031

Shah, C., & Bender, E. M. (2024). Envisioning information access systems: What makes for good tools and a healthy web? ACM Trans. Web, 18(3). https://doi.org/10.1145/3649468

Tay, A. (2024). Avoiding misconceptions—Testing “AI search tools”—What not to do & some research questions to consider. Musings about Librarianship. Retrieved 21 August, 2025 from https://musingsaboutlibrarianship.blogspot.com/2024/12/avoiding-misconceptions-testing-ai.html?view=magazine



Doctoral Students Getting Support from University Library: Two Courses as Cases from Linköping University

Magdalena Öström, Kerstin Annerbo

Linköping University, Sweden

We will present newly established courses at Linköping University in Sweden, where academic librarians teach doctoral students information literacy and other skills needed during thesis work.

Doctoral students are an academic group that has often been overlooked in the past regarding support from the library. While considerable time has been devoted to teaching undergraduate students information literacy, and senior researchers have received customised research support, doctoral students have frequently fallen through the cracks.

According to the Swedish Higher Education Act, undergraduate students should develop the ability to make independent and critical assessments; to distinguish, formulate, and solve problems independently; to seek and evaluate knowledge at a scientific level; and to follow the development of knowledge in their fields. This corresponds to several information literacy concepts. The act further states that education at the advanced and postgraduate levels must build on and deepen this knowledge and these skills and abilities.

Based on these circumstances, and following a reorganisation a few years ago, Linköping University Library decided to enhance our research support to better meet doctoral students’ needs as well.

Currently, our library offers individual search support and introductory workshops on reference management systems, in addition to an introductory lecture titled “New PhD Student” and a web-based doctoral course, “Library Course for Doctoral Students”, comprising eight modules on various topics related to the research process. These modules can be completed with assignments for credit, but attendance without enrolling in the full course is also an option. Moreover, the library provides a credit-bearing, on-site course, “Literature searching and reviewing”, for one of the university’s engineering institutions and has recently been invited to hold lectures in a course at another engineering institution. The courses are continually revised and now include elements of AI literacy and AI-based information searching as well as traditional search methods.

Within the Faculty of Medicine and Health Sciences, library staff play an integral part in two mandatory doctoral courses, “Scientific communication and information retrieval” and “Scientific methodology”. The library also offers lectures and workshops for doctoral groups upon request. Lastly, the library delivers a lecture known as “Soon PhD”, which provides essential information that doctoral students need during the thesis publication process and in preparation for their PhD defence.

We would like to share our experiences, with a focus on the credit-bearing course “Literature searching and reviewing” and the web-based “Library Course for Doctoral Students”, and to discuss further improvements to our courses.



Information Literacy Tasks in Quebec French Schools: Conception and Validation of a Questionnaire addressing Teachers’ Practices

Joannie Pleau, Anne-Michèle Delobbe, Chantale Laliberté

Université du Québec à Rimouski, Canada

Invested in collective efforts to fight disinformation, didactics researchers are interrogating contemporary information literacy practices. In doing so, light needs to be shed on how didactics are actualized, especially in view of the growing importance of generative artificial intelligence (Steinhoff, 2023). To support teachers in their evolving role, teaching practices related to the integration of information literacy in linguistics (Gouvernement de l’Ontario, 2023) and the development of digital competencies (MEES, 2019) need to be investigated. As limited scientific data are available to guide didactic interventions, emerging teacher initiatives are essential to understanding the situation. How are teachers planning and implementing students’ learning tasks in information literacy? How are they evaluating students’ learning? Are teachers integrating generative artificial intelligence into their teaching of information literacy competencies?

This contribution’s objective is to present the initial results of research aimed at conceiving and empirically validating a questionnaire designed to address information literacy tasks (Pleau, 2023) in contemporary teaching practices. Inspired by the work of Waitzmann et al. (2024), the methodological protocol comprised two phases: conception and empirical validation. The latter was implemented through three validation cycles: (1) expert validation; (2) focus group validation; and (3) factor analysis validation (Pohlmann, 2004; Sireci & Sukin, 2013). Each cycle provided feedback that led to adjustments of the questionnaire. The empirical validation addressed a population of 1st to 6th grade French school teachers in Quebec. We created two distinct samples for the validation cycles: focus group validation (N=6) and factor analysis validation (N=100). The initial data present a first portrait of the tasks elaborated and implemented by teachers in different Quebec French schools. The collected data also support a critical analysis of the questionnaire, which will be part of a larger investigation.

References

Gouvernement de l’Ontario. (2023). Attentes et Contenus d’Apprentissage du Programme-cadre de Français de l’Ontario, de la 1re à 8e Année. Retrieved 28 August, 2025 from https://assets-us-01.kc-usercontent.com/fbd574c4-da36-0066-a0c5-849ffb2de96e/e58b0a61-bca4-4d75-9767-f42b1cae9a22/DomaineA_Francais.pdf

Ministère de l’Éducation et de l’Enseignement supérieur [MEES]. (2019). Cadre de Référence de la Compétence Numérique. Gouvernement du Québec. Retrieved 28 August, 2025 from http://www.education.gouv.qc.ca/fileadmin/site_web/documents/ministere/Cadre-reference-competence-num.pdf

Pleau, J. (2023). La Compréhension de l’Information en Ligne par l’Intégration, la Navigation et l’Évaluation : Étude de cas d’internautes de 6e année du primaire [Doctoral dissertation]. Université du Québec à Montréal. Retrieved 28 August, 2025 from https://archipel.uqam.ca/17037/1/D4448.pdf

Pohlmann, J. T. (2004). Use and interpretation of factor analysis in the Journal of Educational Research: 1992–2002. Journal of Educational Research, 98(1): 14–23.

Sireci, S., & Sukin, T. (2013). Test validity. In K. F. Geisinger (Ed.), APA Handbook of Testing and Assessment in Psychology: Vol. 1. Test Theory and Testing and Assessment in Industrial and Organizational Psychology. American Psychological Association.

Steinhoff, T. (2023). «Littératie papier» ou «littératie numérique»? Les deux! – Réflexions sur une didactique post-numérique de la langue première par rapport à la lecture et l’écriture, dans le contexte particulier de l’intelligence artificielle. Lesforum, 3. https://doi.org/10.58098/lffl/2023/3/800

Waitzmann, M., Scholz, R., & Wessnigk, S. (2024). Testing quantum reasoning: Developing, validating, and application of a questionnaire. Physical Review Physics Education Research, 20(1). https://doi.org/10.1103/PhysRevPhysEducRes.20.010122