Conference Agenda


Please note that all times are shown in the time zone of the conference.

Session Overview
Session: Online Session 2
Time: Thursday, 20/Feb/2025, 2:20pm - 3:50pm

Virtual location: Zoom breakout 2

Please register in advance for this session here: https://dcu-ie.zoom.us/j/97518913062 (Meeting ID: 975 1891 3062).

Presentations

Preparing Preservice Teachers for Ethical, Humanising AI Use: An in-process research collaboration

Leigh Graves Wolf1, Michelle Schira Hagerman2, Sajani Karunaweera2

1University College Dublin, Ireland; 2University of Ottawa, Canada

The widespread availability of Generative AI technologies has introduced both revolutionary advances and critical challenges to educational practices (Bearman & Ajjawi, 2023). As Generative AI technologies become increasingly integrated into all aspects of life, including but not limited to educational settings, the imperative to prepare preservice teachers with the contextual knowledge to use these tools is more critical than ever (Mishra et al., 2023). This practice report will share an in-process research collaboration which aims to explore the ethical dimensions of Generative AI in teacher education and to develop pedagogical strategies that empower preservice teachers, and teachers of preservice teachers, to understand and leverage Generative AI systems ethically and critically. Informed by the concept of Entangled Pedagogy (Fawns, 2022), this project aims to develop complementary capacity-building between two universities: one in Canada and one in Ireland. It aspires to make a significant contribution to institutional practices by providing preservice teachers with evidence-based insights for navigating the evolving digital landscape with confidence and ethical awareness, and by providing faculty and staff who support educators with similar mechanisms for building capacity in AI Literacies.

The component of the research project we will discuss at the conference aims to investigate:

  • How do preservice teacher candidates understand Generative AI systems (e.g. ChatGPT or MagicSchool AI) applied to teaching and assessment problems?

  • How do they intend to use Generative AI in their future teaching practice and why?

After receiving ethics approval and consent from students, evidence of understanding of Generative AI systems will be gathered through various artefacts (e.g. photos of mind-maps, in-the-moment conversations, lesson plans, and individual written reflections). The UNESCO AI Competency Framework for Teachers (2024) provides a priori categories of analysis (Human-Centred Mindset; Ethics of AI; AI Foundations and Applications; AI Pedagogy; AI for Professional Development) grounded in agreed-upon principles for ethical, humanising use of AI technologies in educational contexts.

By examining the nuanced ethical dimensions of Generative AI in teacher education, we hope to contribute to the development of a cohort of educators who are prepared to navigate the post-digital landscape armed with critical lenses and care.

References

Bearman, M., & Ajjawi, R. (2023). Learning to work with the black box: Pedagogy for a world with artificial intelligence. British Journal of Educational Technology, 54(5), 1160–1173. https://doi.org/10.1111/bjet.13337

Fawns, T. (2022). An entangled pedagogy: Looking beyond the pedagogy—technology dichotomy. Postdigital Science and Education, 4(3), 711–728. https://doi.org/10.1007/s42438-022-00302-7

Mishra, P., Warr, M., & Islam, R. (2023). TPACK in the age of ChatGPT and Generative AI. Journal of Digital Learning in Teacher Education, 39(4), 235–251. https://doi.org/10.1080/21532974.2023.2247480

UNESCO (2024). AI competency framework for teachers. https://unesdoc.unesco.org/ark:/48223/pf0000391104



A Mixed Methods Study Assessing How GenAI-Generated Feedback Compares to Tutor Feedback on Capstone 3rd Level Research Projects

Francis Ward, Pia O'Farrell, Ernesto Panadero, Orna Farrell

Dublin City University, Ireland

This paper explores the potential of using Generative AI (GenAI) to support tutors in providing feedback on capstone research projects in education. Tutors currently limit feedback to the first three chapters due to workload constraints, leading to student dissatisfaction and concerns about the quality of the final chapters. To address these issues, this study examines how the feedback process can be enhanced by integrating GenAI, allowing tutors to prioritise feedback for analytical sections which require higher-order thinking.

Despite GenAI’s growing capabilities, university staff have been hesitant to adopt it productively due to a risk-averse approach, driven by concerns about penalties and ethical challenges (Ross, 2024). This limits how staff use GenAI for routine tasks, confining its application to teaching about AI rather than utilising it as a teaching tool. As students increasingly embrace AI technologies, educators need to integrate AI into teaching practices to prepare students for the evolving world of work (Cathcart, cited in Ross, 2024).

This study investigates whether GenAI can generate high-quality feedback on procedural sections, enabling tutors to concentrate on providing more complex analytical feedback. It is hypothesised that while GenAI will excel in providing feedback on procedural and presentation tasks, human judgement will remain essential for complex analysis and interpretation.

This research builds on studies investigating “teacher-facing” applications of GenAI (Baker et al., 2019; Zawacki-Richter et al., 2019), emphasising the need for Higher Education Institutions (HEIs) to understand and integrate generative AI tools while preserving academic integrity (QQI, 2023). Existing literature highlights the central role of rubrics in GenAI-assisted assessment (e.g., Li et al., 2024) and suggests that although GenAI offers scalability in marking, it may not fully replace human judgement in assessing higher-order learning (Wetzler et al., 2024).

The study employs a mixed methods approach involving 182 students and 30 tutors. Tutors will assess student work using a rubric, and selected submissions will be input into GenAI to generate marks and feedback. Tutors will then compare the AI-generated feedback with their own to evaluate its accuracy and usefulness. Data collection includes interviews with tutors to gather their perceptions of using GenAI, and quantitative analysis will compare rubric usage and the marks awarded by tutors versus GenAI.
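As a purely illustrative aside, the sketch below indicates the kind of paired comparison the quantitative strand might involve, assuming each submission receives one tutor mark and one GenAI mark against the same rubric. The column names, sample values, and choice of tests are hypothetical assumptions and do not represent the study's actual analysis plan.

```python
# Purely illustrative sketch (not the study's actual analysis plan): a paired
# comparison of tutor and GenAI marks, assuming each submission is marked by
# both against the same rubric. Column names and values are hypothetical.
import pandas as pd
from scipy import stats

marks = pd.DataFrame({
    "submission_id": [1, 2, 3, 4, 5, 6],
    "tutor_mark":    [62, 55, 70, 48, 66, 58],
    "genai_mark":    [65, 58, 68, 52, 64, 61],
})

# How closely do the two sets of marks track each other?
r, r_p = stats.pearsonr(marks["tutor_mark"], marks["genai_mark"])

# Is there a systematic difference in the marks awarded?
t, t_p = stats.ttest_rel(marks["tutor_mark"], marks["genai_mark"])

print(f"Pearson r = {r:.2f} (p = {r_p:.3f})")
print(f"Paired t = {t:.2f} (p = {t_p:.3f}), "
      f"mean difference = {(marks['genai_mark'] - marks['tutor_mark']).mean():.2f}")
```

Depending on how the rubric scores are scaled, a non-parametric alternative (e.g. a Wilcoxon signed-rank test) or an inter-rater agreement measure might be more appropriate; the sketch only illustrates the shape of the paired, per-submission comparison described above.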

Although challenges such as tutor participation time, technical skills, and ethical considerations are anticipated, the study offers significant opportunities. By integrating GenAI into the assessment of procedural writing, tutors can focus on providing analytical feedback, potentially improving student performance. Furthermore, the study aligns with policy discussions emphasising ethical AI integration, ensuring that human oversight complements AI technologies in education (QQI, 2023).

In conclusion, this paper aims to demonstrate the potential of GenAI to enhance feedback quality, improve student outcomes, support tutor development, and contribute to educational policy and practice. By addressing current challenges in the large-scale provision of feedback, this research points towards a more effective, technologically integrated educational landscape.

References

Baker, T., et al. (2019). Educ-AI-tion Rebooted? Exploring the future of artificial intelligence in schools and colleges. Nesta. https://media.nesta.org.uk/documents/Future_of_AI_and_education_v5_WEB.pdf

Li, J., et al. (2024). AI-assisted marking: Functionality and limitations of ChatGPT in written assessment evaluation. Australasian Journal of Educational Technology. https://doi.org/10.14742/ajet.9463

QQI (2023). Advice on artificial intelligence in education and training. https://www.qqi.ie/news/advice-on-artificial-intelligence-in-education-and-training

Ross, J. (2024). Higher education staff missing opportunity to use generative AI tools. Times Higher Education. https://www.timeshighereducation.com/news/ai-potential-squandered-universities-risk-focused-approach

Wetzler, E. L., et al. (2024). Grading the Graders: Comparing Generative AI and Human Assessment in Essay Evaluation. Teaching of Psychology. https://doi.org/10.1177/00986283241282696

Zawacki-Richter, O., et al. (2019). Systematic review of research on artificial intelligence applications in higher education. International Journal of Educational Technology in Higher Education, 16(1), 39. https://doi.org/10.1186/s41239-019-0171-0



From Mastery to Networks: Sociotechnical AI Systems and Human Agency

Ana Mouta, Ana María Pinto-Llorente, Eva María Torrecilla-Sánchez

Faculty of Education, University of Salamanca, Spain

This research explores the relationship between human agency and the use of sociotechnical AI technologies in educational contexts. It critically analyses how debates around ethics and notions of privacy have overshadowed key considerations of intimacy, secrecy, determination, and agency. Although AI applications in education are often promoted as transformative tools for enhancing various aspects of the learning experience, the empirical evidence supporting these claims remains limited or questionable – particularly when automation prioritises efficiency at the expense of agency on multiple levels. In this context, the study focuses specifically on how these technologies shape agency dynamics across the subjective, intersubjective, and collective dimensions as perceived by educators. Departing from traditional notions of agency as mastery or control, it adopts a framework of distributed agency, where action emerges from relational networks rather than being confined to individual entities.

To explore how educational actors collectively conceive the particularities of AI, especially concerning automation and its implications for human agency, this study employs a qualitative analysis of focus group discussions. The cohort consisted of 19 educators (10 males and 9 females) from five countries. The participants were selected based on their experience in K-12 teacher education and their proficiency in Spanish, the primary language of the research centre. Convenience sampling was initially used, followed by snowball sampling to further increase cultural diversity. While the sample size is relatively small, it is diverse in terms of cultural and professional backgrounds (e.g., associate professors, researchers, former K-12 teachers now holding governance roles in the Ministry of Education), ensuring a range of perspectives on the use of AI in education. Ethical approval for this study was obtained from the University of Salamanca, ensuring the protection of participants' rights and confidentiality.

The discussions addressed the use of AI applications in educational settings, with a specific focus on participants’ spontaneous reflections about different aspects of agency. The methodology enabled a comprehensive exploration of teachers' concerns. At the subjective level, teachers explore their role in fostering critical reasoning, decision-making, and moral development in students, acknowledging the potential negative impact of AI systems that provide overly rapid feedback, which may diminish students' sense of control, motivation, and emotional involvement. Teachers also express concerns about the impact of AI on individuality, subjectivity, and self-regulation, and question the relevance of AI evaluations in fostering self-reflectiveness and lasting academic engagement. At the intersubjective level, educators stress the importance of their authority and role as scaffolding figures. However, they largely overlook the impact of AI on peer relationships and the role of parents in supporting learning. At the collective level, teachers advocate for the preservation of schools as democratic spaces that foster creativity, plurality, and emotional synchrony, while warning against the threat posed by AI-enhanced technologies that may undermine critical pedagogy and collective educational experiences. Generally, teachers highlight the importance of integrating AI in a way that preserves meaningful educational experiences and supports organic, collaborative learning.

These findings reveal that while educators are attuned to their students’ subjective sense of agency – encompassing how they foresee, plan, self-regulate, and self-reflect – they often overlook the broader intersubjective and systemic implications of AI systems. This oversight risks fostering narratives that envision a diminished role for teachers in shaping their educational ecosystem cultures. By foregrounding these relational dynamics, the study highlights how AI’s mediating role complicates the maintenance of a sense of belonging in collective settings and engagement with moral judgment. It also underscores the need for further research into the long-term effects of AI on agency dynamics across diverse educational experiences. Ultimately, the study aims to advance the understanding of actants – entities that contribute to distributed agency – before they consolidate into dominant actors within educational networks. This endeavour calls for a collective responsibility among all educational stakeholders to deeply engage with the processes through which schools meaningfully fulfil their roles in qualification, subjectivation, and socialisation, nurturing these as shared and participatory actions.

References

Bandura, A. (2006). Toward a psychology of human agency. Perspectives on Psychological Science, 1, 164–180. https://doi.org/10.1111/j.1745-6916.2006.00011.x

Biesta, G., & Tedder, M. (2006). How is agency possible? Towards an ecological understanding of agency-as-achievement (Working Paper Five). Learning Lives: Learning, Identity, and Agency in the Life Course. Teaching and Learning Research Programme.

Latour, B. (2005). Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford University Press.

Latour, B. (2014). Agency at the time of the Anthropocene. New Literary History, 45(1), 1-18. https://doi.org/10.1353/nlh.2014.0003

Moore, J. W. (2016). What is the sense of agency and why does it matter? Frontiers in Psychology, 7: 1272. https://doi.org/10.3389/fpsyg.2016.01272

Pasquinelli, M. (2020). The eye of the master: A social history of artificial intelligence. Verso.

Priestley, M., Biesta, G., & Robinson, S. (2015). Teacher agency: An ecological approach. Bloomsbury.



Integrating Design Thinking and Technology-Enhanced Learning in K-12 to Foster Socio-Scientific Understanding

Ahmed Mohammed1, Rafael Zerega1, Johanna Velander1, Nuno Otero2, Marcelo Milrad1

1Linnaeus University, Sweden; 2University of Greenwich, United Kingdom

Background

Technology-enhanced learning (TEL) environments, driven by emerging technologies, can transform education by enhancing teaching and equipping students with essential 21st-century skills (Ragab et al., 2024). However, their effectiveness relies on addressing students' cognitive and emotional needs while supporting educators in adapting to innovative pedagogies (Peterson et al., 2018). Student-centered approaches, while valuable, frequently neglect co-design processes and the need for professional development that enables teachers to integrate TEL effectively (Rajaram, 2023). Building on the Horizon Europe-funded Exten.(D.T.)² project, this study investigates how to integrate Design Thinking (DT) with interactive tools such as ChoiCo, a gamified web-based platform that can foster creativity to address global challenges (Milrad et al., 2024). Additionally, the study explores the scalability of DT and TEL across diverse contexts, highlighting the need for open-ended, structured, feedback-driven activities to enhance teaching, foster student curiosity and creativity, and address critical Socio-Scientific Issues (SSI). Ultimately, the study presents an alternative pedagogical approach to addressing SSI while preparing students to thrive in and contribute to a data-driven, interconnected world (Possaghi et al., 2024).

Study Design and Methodology

Two interventions were conducted at a school in Sweden during the Spring and Fall terms of 2024. The first involved 75 students, and the second engaged four mathematics teachers in workshops integrating DT, game design, and prototyping. Using a Design-Based Research approach, the interventions followed iterative cycles of design, testing, and refinement, guided by continuous feedback (Jetnikoff, 2015). The first intervention included pollinator conservation activities, where students enhanced their problem-solving, programming, and analytical skills through ChoiCo game design. The second focused on equipping educators to develop curriculum-aligned games addressing SSI such as environmental hazards and global warming.

Discussion of Findings and Conclusion

The outcomes of the workshops highlighted both the potential and challenges of integrating DT with tools like ChoiCo. While students demonstrated strong engagement in design, problem-solving, and gameplay activities, they encountered difficulties in understanding game variables and DT phases, as well as technical issues with the platform. These findings underscore the importance of co-designing tools with students and educators to improve usability and foster creativity. Future actions include developing a real-time analytics dashboard to track progress, creating personalized learning pathways, and expanding the integration of 21st-century SSI to enhance multidisciplinary learning.

References

Jetnikoff, A. (2015). Design-based research methodology for teaching with technology in English. Practical Literacy: The Early & Primary Years, 20(1), 23–26. Retrieved from https://www.eric.ed.gov/?id=EJ1072345

Milrad, M., Herodotou, C., Grizioti, M., Lincke, A., Girvan, C., Papavlasopoulou, S., Shrestha, S., & Zhang, F. (2024). Combining design thinking with emerging technologies in K-12 education. In Proceedings of the 23rd Annual ACM Interaction Design and Children Conference.

Peterson, A., Dumont, H., Lafuente, M., & Law, N. (2018). Understanding innovative pedagogies: Key themes to analyse new approaches to teaching and learning. OECD Education Working Papers, No. 172. OECD Publishing. https://doi.org/10.1787/2adf8e21-en

Possaghi, I., Zhang, F., Sharma, K., & Papavlasopoulou, S. (2024). Design thinking activities for K-12 students: Multi-modal data explanations on coding performance. In Proceedings of the 23rd Annual ACM Interaction Design and Children Conference (pp. 290–306). https://doi.org/10.1145/3628516.3655786

Ragab, K., Fernandez-Ahumada, E., & Martínez-Jiménez, E. (2024). Engaging minds—Unlocking potential with interactive technology in enhancing students’ engagement in STEM education. In Interdisciplinary Approaches for Educators’ and Learners’ Well-being: Transforming Education for Sustainable Development (pp. 53–66).

Rajaram, K. (2023). Future of learning: Teaching and learning strategies. In Learning Intelligence: Innovative and Digital Transformative Learning Strategies: Cultural and Social Engineering Perspectives (pp. 3–53). Springer Nature Singapore.



Re-configuring Optimisation Paths in Data-informed Learning Analytics

Johanna Velander

Linnaeus University, Sweden

Designing and shaping data-driven educational technologies often starts with accepting the validity of the underpinning values that have shaped and promoted common goals and visions for a shared future with technology (Rahm, 2023). Technology promises to optimise processes, making them more efficient and effective; optimisation therefore often means accuracy and efficiency, since these yield maximum profit in data-driven contexts. Because competitiveness and profit are highly valued in society today, the perceived risk of "falling behind" in the race to adapt to new technological developments such as AI (which is not, in fact, new at all) can be countered by identifying what we stand to lose if we adopt, and adapt to, techno-deterministic narratives without questioning the goals we are so afraid of missing.

Acknowledging this environment, I want to challenge the driving forces informing current data-driven practices in the educational context of learning analytics (LA) by imagining and co-constructing alternative LA solutions with stakeholders. The complexity of ML algorithms makes it difficult to engage with them explicitly; this study would therefore engage with these algorithms by "looking around, rather than inside, increasingly opaque and unknowable black boxes" (Perrotta & Selwyn, 2020, p. 254). More specifically, I take inspiration from Prinsloo's suggested broad framework for "engaging with the potential but also the curtailing dangers of algorithmic decision-making in education" (Prinsloo, 2017). The study design would also reveal tensions between stakeholder values regarding data-driven LA and the currently dominant individual control model (Solove & Hartzog, 2024).

This study takes its departure from a master's thesis project already conducted (Velander, 2020), in which students at a university in Sweden were asked to reflect on data-driven practices in general and at their institutions (Velander et al., 2021). A dashboard visualising data that the current Learning Management System, Moodle, collected about them was developed and deployed by the author (at that point an MSc student) on the university servers. Students attending several courses had access to the dashboard for two weeks, where they could find visualisations of their data logged by the system. Insights from a survey distributed before the dashboard intervention revealed low awareness of, and high acceptance towards, data collection, especially at the university. However, after having had access to the dashboard and seen the data used in different contexts, many issues were highlighted that were specific to how the data was used, who could access it, and what conclusions might be drawn from it. Further, the results confirmed what has been noted by previous research: the powerlessness of the data subjects (students) to keep themselves informed and up to date and, most of all, to be heard and to contribute to some sort of change to current practices (Pangrazio & Selwyn, 2019; Solove & Hartzog, 2024).
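As a purely illustrative aside, the sketch below shows the kind of aggregation and visualisation such a dashboard might perform on LMS activity logs. The file name, column names, and chart type are hypothetical assumptions for illustration only; they do not reflect Moodle's actual log schema or the dashboard built in the thesis project.

```python
# Hypothetical sketch: daily activity counts per student from an exported
# LMS activity log. File name and column names are illustrative assumptions,
# not Moodle's actual schema.
import pandas as pd
import matplotlib.pyplot as plt

log = pd.read_csv("activity_log.csv", parse_dates=["timestamp"])
# assumed columns: timestamp, user, event

daily = (
    log.groupby([log["timestamp"].dt.date, "user"])
       .size()
       .rename("events")
       .reset_index()
)

# One line per student: how many events the LMS logged about them each day.
for user, rows in daily.groupby("user"):
    plt.plot(rows["timestamp"], rows["events"], marker="o", label=str(user))

plt.xlabel("Date")
plt.ylabel("Logged events")
plt.title("Daily LMS activity per student")
plt.legend()
plt.tight_layout()
plt.show()
```

A real deployment would read from the LMS database or API rather than a static export, and would restrict each student's view to their own data; decisions of exactly this kind are what the study proposes to open up to stakeholders.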

References

Pangrazio, L., & Selwyn, N. (2019). ‘Personal data literacies’: A critical literacies approach to enhancing understandings of personal digital data. New Media & Society, 21(2), 419–437. https://doi.org/10.1177/1461444818799523

Rahm, L. (2023). Education, automation and AI: a genealogy of alternative futures. Learning, Media and Technology, 48(1), 6-24.

Solove, D. J., & Hartzog, W. (2024). Kafka in the Age of AI and the Futility of Privacy as Control (SSRN Scholarly Paper 4685553). Social Science Research Network. https://doi.org/10.2139/ssrn.4685553

Velander, J. (2020). Student awareness, perceptions and values in relation to their university data.

Velander, J., Otero, N., Pargman, T. C., & Milrad, M. (2021). “We Know What You Were Doing” Understanding Learners’ Concerns Regarding Learning Analytics and Visualization Practices in Learning Management Systems. In Visualizations and Dashboards for Learning Analytics (pp. 323-347). Cham: Springer International Publishing.


