Conference Agenda
Overview and details of the sessions of this conference. Please note that all times are shown in the time zone of the conference (BST).
Agenda Overview

Session
L&T 01: Engaging with AI in Learning & Teaching
Presentations
Adapting and Responding to the Emergence of Generative AI: An Ethical Approach to Assessment
University of York, United Kingdom

Generative Artificial Intelligence (GenAI) has emerged as a serious disrupter of long-held beliefs about teaching and learning in universities. Many students and educators have acquiesced in the presumption that AI can benefit students’ learning by offering, for example, ways to plan contributions to class, to organise written work by identifying core themes, or to explain core concepts. Others believe that assessed work can be aided by adopting a jigsaw approach that links AI-generated content into a coherent whole, sufficient to satisfy examiners and deliver academic success. Others may limit their tolerance of AI to a tool for enhancing writing skills, ensuring grammatically correct and intelligible English. How should we address these challenges, and what are the ethics around the use of GenAI, especially in assessment? The paper recommends institutional responses to GenAI that uphold the key foundations of effective teaching and learning while ensuring that assessment focuses on how students evidence their accumulated knowledge and demonstrate core competences, including critical analysis, creative skills, and the ability to meet the demands of a range of assessment methods.

Students as Actors: Changing the Perspective in the Discussion of GenAI Use
University of Freiburg, Germany

In this short paper I advocate for a change of perspective in the discussion of GenAI use at the university. Current research has pondered the patterns, reasons and consequences of GenAI use by students, producing extremely helpful insights for university lecturers and management. However, most of this discussion presents students as individual actors making decisions based on a number of factors (level of competence, self-confidence, demographic and environmental characteristics...) and purposes (perceived importance and priority of tasks, considerations of efficiency, comparative performance and skill acquisition...). There is very little discussion of students’ collective interests and collective action, such as participating in the formulation and implementation of GenAI use policies and guidelines. Educators mostly discuss how students should be taught and guided. The conversation stops before considering how students should be enabled, as a group, to participate in decision-making processes and in the development of epistemic infrastructures in courses, study programs and higher education in general. My goal is thus to add this currently missing perspective and to invite fellow scholars and educators to expand teaching and research practices to include students as decision-making actors on a collective rather than only an individual level.

GenAI Impact on Students’ Epistemic Agency and Responsibility: Three Dimensions
University of Freiburg, Germany

Teachers make students struggle: we put them in front of a steep hill and ask them to ascend it. Much of the current discussion about the introduction of Generative AI (GenAI), such as chatbots and teaching assistants based on Large Language Models (LLMs), debates whether these technologies in education resemble walking sticks or a helicopter. If a student is lifted to the top of the hill in a helicopter, does she know how to climb it (after all, she got there), or does the pilot know how to do it? Which parts of inquiry can the student efficiently and legitimately delegate to GenAI without compromising her ability to evaluate and build knowledge, and what does she have to struggle through herself? What type of struggle in learning is a pedagogical necessity, and what type is a technical inefficiency? This paper explores the impact of GenAI on students’ epistemic agency: their capacity to transform information into reflectively endorsed beliefs and to assume responsibility for them.
It focuses on the scholarly disagreement about GenAI use in higher education: do these technologies serve as a legitimate support (walking sticks) or as a substitute (a helicopter) for learners’ competence to produce knowledge? It unfolds this conundrum in three dimensions: structural, behavioral, and attitudinal. It argues that uncritical GenAI integration risks removing pedagogically necessary struggle by mistaking it for technical inefficiency. At the structural level, the paper argues that GenAI risks transforming learners from active creators of knowledge into passive “operators” in a distributed epistemic system. At the behavioral level, it warns that outsourcing cognitive work creates “cognitive debt” by delaying skill development in favor of quick and easy solutions to complex cognitive and social tasks. It argues that novices, lacking the baseline skills to evaluate AI outputs, cannot determine “appropriate” reliance and thus cannot escape GenAI over-reliance. At the attitudinal level, the paper explores how anthropomorphized tools cause identity threat and “moral deskilling”, potentially leading to the disintegration of responsible agency. The paper concludes that the alleged gains in convenience come at the high social and political cost of undermining intellectual freedom and political accountability. Higher education must reconfirm the value and centrality of epistemic work to learning and intellectual life, introduce GenAI in a measured way adapted to learners’ skills, and demand design interventions that demystify the technology.

Research in Transition - What AI Brings to Development of Research and Science
Kozminski University, Poland

Among the many dimensions of the use of Artificial Intelligence (AI) in academia, research is one of the most widely discussed topics (see e.g. M. Chugunova et al. (2026), Who uses AI in research, and for what? Large-scale survey evidence from Germany, Research Policy 55; L. Panda (2025), Rethinking peer review in the AI era with responsibility and transparency, Elsevier; W. Strielkowski (2024), Could AI change the scientific publishing market once and for all?, arXiv:2401.14952, https://doi.org/10.48550/arXiv.2401.14952). Certainly, the problem of research and AI is multidimensional. Research projects are usually complex undertakings: they are very demanding not only in terms of work, but they also call for inventive and innovative abilities, and often for managerial skills that not every academic possesses. This may be an important reason why the number of academics using AI tools in research is increasing rapidly over a relatively short time span: 44% of researchers have used AI tools at least a few times (Chugunova et al. 2026, 3), and 60% of that population use AI for work (R. Fieldhouse (2025), AI is saving time and money in research — but at what cost?, Nature). Importantly, AI is used in research not as “a mere assistant” but as “a co-creator”, namely for “core research activities” such as “ideation and conceptual development”, as well as “in the dissemination stage” “to help write research manuscripts” (Chugunova et al. 2026). These facts not only affect research itself; they may also lead to a substantial change in the “traditional” way of conducting research as it has been practised over the last few decades. The main objective of the paper is therefore to discuss how AI may change research in the relatively near future, and what the response to this new, fast-evolving phenomenon should be. Undoubtedly, AI may support the development of research and science (for example, by helping to devise an urgently needed vaccine), but there are also red lines that should not be crossed.

