Conference Agenda

Overview and details of the sessions of this conference.

Please note that all times are shown in the time zone of the conference (IST).

 
Session Overview
Date: Thursday, 20/Feb/2025
2:20pm - 3:50pm: Online Session 1
Virtual location: Zoom breakout 1
 

Wild Web: Lifelong Learning and Teaching in Times of Digital Ubiquity

Miriam Reynoldson

RMIT University, Australia

When we remove formal education and its trappings, what might teaching and learning look like in the (postdigital) wild?

This 15-minute provocation positions informal lifelong learning as a potential rewilding of education. Echoing Biesta (2017), the point of education is to “learn something, ... learn it for particular reasons, and ... learn it from someone” (pp. 27–28). What might this look like outside of the increasingly algorithmic structures of higher education? If, as Cormier (2024) suggests, in times of digital abundance the community is the curriculum, what would be needed to support a learning society in which each of us is always a potential learner and teacher?

This conceptual presentation draws on theory from my doctoral research, which explores the value of informal lifelong learning practices of adults in conditions of digital ubiquity. I want to set aside the formal codification of higher education – qualification levels, rankings, measured outcomes, all the instruments through which education systems quantify and abstract learning itself.

Instead, I consider the informal-yet-intentional forms of learning in which we engage throughout life in the (post)digital age: YouTube tutorials, mentoring relationships, online communities of interest, writing workshops. They cannot easily be quantified. They frequently subvert the master-pupil dynamic of organised schooling. They inevitably straddle both digital and embodied worlds, and always hold the presence or echoes of others with whom to learn and teach.

Alheit, P. (2022). ‘Biographical learning’ reloaded. Adult Education Critical Issues, 2(1), 7–19. https://doi.org/10.12681/aeci.30008

Biesta, G. J. J. (2021). Holding oneself in the world: Is there a need for good egoism? In Meeting the challenges of existential threats through educational innovation (pp. 115–126). Routledge.

Biesta, G. J. J. (2017). The Rediscovery of Teaching (1st ed.). Routledge. https://doi.org/10.4324/9781315617497

Cormier, D. (2024). Learning in a time of abundance: The community is the curriculum. Johns Hopkins University Press.

Jackson, N. (2011). Learning for a complex world: A lifewide concept of learning, education and personal development. Authorhouse.

Jarvis, P. (2007). Globalization, lifelong learning and the learning society. Routledge.

Watters, A. (2025, February 14). Discrimination engines. Second Breakfast. https://2ndbreakfast.audreywatters.com/personalization-ruptured-were-all-in-this-together/



Mapping AI Literacy Frameworks: An Analysis of the Evolving Metaphorical Relationships Between Students, Teachers, and AI

Kaitlin A Lucas1, Alberto Lioy2

1Central European University, Austria; 2University of Hradec Králové, Czech Republic

Is artificial intelligence a form of black magic (Lao, 2020)? Are we dragon riders taming a technological beast (Bozkurt, 2024)? Rich metaphors abound within the growing body of research surrounding AI literacy, and there is no better place to look for them than the rapidly proliferating AI literacy frameworks themselves.

Conceptual metaphors and AI literacy frameworks, which outline “the essential abilities that people need to live, learn and work in our digital world through AI-driven technologies” (Ng et al., 2021), complement each other by organizing our perception of this complex topic and coordinating our actions in response. Both are dynamic, evolving as our understanding of AI and literacy changes. However, they share common challenges: if too simple, they are deemed too essentialist or limiting; if too complex, they lack utility (Lakoff & Johnson, 2008; Petrie & Oshlag, 1993).

Through idiographic metaphor analysis (Redden, 2017), we coded the metaphors in eighteen AI literacy frameworks to uncover the underlying metaphorical relationships between AI, students, and teachers. We highlight the dominant metaphors for each actor: AI-as-tool-transformer-ubiquitary-artefact-threat; student-as-analyst-citizen-creator; and teacher-as-designer-guide. We then discuss the natural connections between these metaphors and several tensions that arise. Among these we include the need to converge diverse models of AI literacy, discordance between the metaphorical views of literacy as power and/or adaptation within the frameworks (Scribner, 1984), and an overemphasis on individual literacy at the expense of the collective.

At the end of the session, we discuss directions for AI literacy that address these tensions and consider our individual and collective capacity in higher education to shape or reject an AI-saturated future.

References

Bozkurt, A. (2024). Why generative AI literacy, why now and why it matters in the educational landscape?: Kings, queens and GenAI dragons. Open Praxis, 16(3), 283-290.

Lakoff, G., & Johnson, M. (2008). Metaphors we live by. University of Chicago Press.

Lao, N. (2020). Reorienting machine learning education towards tinkerers and ML-engaged citizens (Doctoral dissertation, Massachusetts Institute of Technology).

Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence, 2, 100041.

Petrie, H. G., & Oshlag, R. S. (1993). Metaphor and learning. Metaphor and Thought, Second Edition. Cambridge University Press.

Redden, S. M. (2017). Metaphor analysis. The International Encyclopedia of Communication Research Methods, 1–9.

Scribner, S. (1984). Literacy in three metaphors. American Journal of Education, 93(1), 6–21.



Reclaiming Creativity and Human Agency: A Framework for Learner Engagement in Post-AI Education

Aleksandra Shornikova

Dublin City University, Ireland

The growing presence of artificial intelligence (AI) is becoming difficult to deny. While some scholars and practitioners remain apprehensive, debating the benefits and drawbacks of technological innovation, others are beginning to embrace AI-enhanced tools. However, the rapid spread of AI poses profound challenges to traditional notions of creativity and learning (Hamed et al., 2024; Henriksen et al., 2024; Kosslyn, 2024). Grounded in frameworks of critical AI literacy (Velander et al., 2024) and theories of creativity as a process-oriented rather than product-focused experience (Blanche, 2007), this report introduces a conceptual framework for reclaiming creativity and human agency in post-AI higher education. It argues that fostering critical engagement, reflective practice, and problem-solving can promote meaningful learner engagement.

Central to this framework is the need to clearly define the role AI plays and its limitations in higher education. By building on Kosslyn’s (2024) idea of AI as a “cognitive amplifier,” this framework reconceptualises AI as a technology that enhances human strengths and compensates for our limitations without displacing uniquely human qualities. The concept of “critical creativity” (Titchen and McCormack, 2010) underpins this perspective, providing a foundation for integrating reflective and imaginative methodologies into educational practices that prioritise process over results. This framework positions learners as active participants, critically engaging with AI as a collaborative partner in co-creating knowledge and generating ideas. However, as Kosslyn (2024) notes, AI lacks volition or agency, and therefore cannot share responsibility in the same way a human collaborator can. Engaging critically with AI requires mastering a range of mental skills, including attention to detail, language comprehension and expression, and ethical discernment.

While concerns over the automation of human tasks and the commodification of creativity are valid, they reflect a narrow focus that limits our ability to address the broader implications of AI in higher education. This framework advocates for a shift in thinking: what if we prioritise the process of learning and creating rather than the outcomes? Instead of competing with AI or resisting its presence, educators can adopt emerging frameworks that empower learners to develop uniquely human skills. By reclaiming creativity and human agency in the post-AI era, this approach envisions a future where learners navigate and shape their world with imagination, ethical awareness, and a sense of purpose. Ultimately, this conceptual framework ensures that AI serves as a catalyst for human flourishing rather than a force of dehumanisation in education.

References:

Blanche, E. I. (2007). The Expression of Creativity through Occupation. Journal of Occupational Science, 14(1), 21–29. https://doi.org/10.1080/14427591.2007.9686580

Hamed, A. A., Zachara-Szymanska, M., & Wu, X. (2024). Safeguarding Authenticity for Mitigating the Harms of Generative AI: Issues, Research Agenda, and Policies for Detection, Fact-Checking, and Ethical AI. iScience, 27(2), 108782. https://doi.org/10.1016/j.isci.2024.108782

Henriksen, D., Mishra, P., & Stern, R. (2024). Creative Learning for Sustainability in a World of AI: Action, Mindset, Values. Sustainability, 16(11), 4451. https://doi.org/10.3390/su16114451

Kosslyn, S. M. (2024). Learning to Flourish in the Age of AI (1st ed.). Routledge. https://doi.org/10.4324/9781032686653

Titchen, A., & McCormack, B. (2010). Dancing with stones: critical creativity as methodology for human flourishing. Educational Action Research, 18(4), 531–554. https://doi.org/10.1080/09650792.2010.524826

Velander, J., Otero, N., & Milrad, M. (2024). What is Critical (about) AI Literacy? Exploring Conceptualizations Present in AI Literacy Discourse. In: Buch, A., Lindberg, Y., Cerratto Pargman, T. (eds) Framing Futures in Postdigital Education. Postdigital Science and Education. Springer, Cham. https://doi.org/10.1007/978-3-031-58622-4_8



Rage Against The Machine? Buddhist Ethics and Algorithmic Justice.

David Webster

University of Liverpool, United Kingdom

The US band Rage Against the Machine's eponymously titled first album featured the iconic photograph of the Vietnamese monk Thích Quảng Đức engaged in an act of self-immolation. This session will discuss how we can delve into the details of Buddhist psychological theory (as in the Abhidhamma texts), which presents a complex account of consciousness, with some purpose. These texts offer an orientation to the analysis of consciousness which ultimately seeks to foreground ethics. What can we learn as we wrangle with the shuffling simulacra of thought that GAI offers?

This session will be based around some short readings shared before the event, and participants will work as a team to see if we can (non-artificially) generate a short blog post which maps Buddhist psychological ethics onto the ethical lacunae of generative AI.



Higher Education In Crisis: Is Generative AI The Cavalry Or A Trojan Horse?

Nick Baker

University of Windsor, Canada

Higher education is facing imminent and significant crises in a number of countries, including Canada, Australia, and the UK, as a result of chronic neglect by governments, declining public trust and interest in the sector, and populist and xenophobic policies targeting marginalised international students. With higher education institutions (HEIs) in Canada, the UK and Australia all facing the real prospect that not just individual institutions but whole sectors may collapse in the very near future, increasingly extreme measures are being taken in attempts to reach fiscal sustainability. Leaders are trying to balance the damaging impact of short-term fiscal strategies (e.g. cutting staff and programs) against maintaining enough of the core functionality to be able to rebuild in the future (Hale et al., 2006). In the scramble to find efficiencies and transform processes, HEIs often turn to consultants with thoughts of “digital transformation” as a ready-made solution, examining all processes for ways to use technology to increase productivity, reduce the time needed for tasks, and often ultimately to reduce the size of the workforce (Blackburn et al., 2020).

In the past, the potential cost savings from digital transformation have been modest and rarely a quick fix, requiring capital, human and cultural investments to see dividends (Rof, Bikfalvi & Marques, 2020). In the current zeitgeist, however, a combination of near-possibilities and magical thinking related to generative AI’s transformational capabilities may trigger poorly controlled changes in higher education that may be undesirable and irreversible. While there are competing narratives of AI as a replacement for human work versus an augmentation or assistant for humans, in times of crisis humans tend to make decisions that address the acute need but which may be less strategically sound. In effect, their decisions may ‘eat the future’ (Usher, 2024). There are emerging examples of AI agents replacing human functions in clerical and support tasks across HE, as there are in other industries, but also in teaching, as budgets for part-time instructors and teaching assistants disappear, and in research, as governments demand more return on research investments (e.g. Bates et al., 2020).

It is possible that the critical and thoughtful embedding of AI across our work may lead to better outcomes for students and staff in the medium to long term. However, when survival is the core driver of decisions and normal checks and balances are suspended, the long-term implications may not be weighed as carefully as they should, and the potential for the further enshittification of higher education seems high (Doctorow, 2022).

This session will discuss the implications of seeing AI as a potential saviour of higher education systems in distress, and the need for balancing short-term gains and longer-term system change to achieve a new form of sustainability.

References:

Bates, T., Cobo, C., Marino, O., and Wheeler, S. (2020). Can artificial intelligence transform higher education? International Journal of Educational Technology in Higher Education. 17(42). https://doi.org/10.1186/s41239-020-00218-x

Blackburn, S., LaBerge, L., O’Toole, C., and Schneider, J. (2020). Digital Strategy in a Time of Crisis: Now is the time for bold learning at scale. McKinsey Digital. Online: https://kolnegar.ir/wp-content/uploads/2020/07/Digital-strategy-in-a-time-of-crisis.pdf Accessed: 27 November, 2024.

Doctorow, C. (2022). Social Quitting. Online: https://doctorow.medium.com/social-quitting-1ce85b67b456 Accessed: 28 November 2024.

Hale, J.E., Hale, D.P., and Dulek, R.E. (2006). Decision processes during crisis response: An exploratory investigation. Journal of Managerial Issues, 18(3): 301-320. https://www.jstor.org/stable/40604542

Rof, A., Bikfalvi, A., and Marques, P. (2020). Digital Transformation for Business Model Innovation in Higher Education: Overcoming the Tensions. Sustainability. 12(12), 4980. https://doi.org/10.3390/su12124980

Usher, A. (2024). Eating the future. Higher Education Strategy Associates Blog, 9 September, 2024. Online: https://higheredstrategy.com/eating-the-future/ Accessed: 27 November, 2024.

 
2:20pm - 3:50pm: Online Session 2
Virtual location: Zoom breakout 2
 

Preparing Preservice Teachers for Ethical, Humanising AI Use: An in-process research collaboration

Leigh Graves Wolf1, Michelle Schira Hagerman2, Sajani Karunaweera2

1University College Dublin, Ireland; 2University of Ottawa, Canada

The widespread availability of Generative AI technologies has introduced both revolutionary advances and critical challenges to educational practices (Bearman & Ajjawi, 2023). As Generative AI technologies become increasingly integrated into all aspects of life, including but not limited to educational settings, the imperative to prepare preservice teachers with the contextual knowledge to use these tools is more critical than ever (Mishra et al., 2023). This practice report will share an in-process research collaboration which aims to explore the ethical dimensions of Generative AI in teacher education, and develop pedagogical strategies that empower pre-service teachers, and teachers of pre-service teachers, to understand and leverage generative AI systems ethically and critically. Informed by the concept of Entangled Pedagogy (Fawns, 2022), this project aims to develop complementary capacity-building between two universities: one in Canada and one in Ireland. It aspires to make a significant contribution to institutional practices by providing preservice teachers with evidence-based insights for navigating the evolving digital landscape with confidence and ethical awareness, and for providing faculty and staff who support educators with similar mechanisms for building capacity in AI Literacies.

The component of the research project we will discuss at the conference aims to investigate:

  • How do pre-service teacher candidates understand Generative AI systems (e.g. ChatGPT or MagicSchool AI) applied to teaching and assessment problems?

  • How do they intend to use Generative AI in their future teaching practice and why?

After receiving ethics approval and consent from students, evidence of understanding of Generative AI systems will be gathered through various artefacts (e.g. photos of mind-maps, in-the-moment conversations, lesson plans, and individual written reflections). The UNESCO AI Competency Framework for Teachers (2024) provides a priori categories of analysis (Human-Centred Mindset; Ethics of AI; AI Foundations and Applications; AI Pedagogy; AI for Professional Development) grounded in agreed-upon principles for ethical, humanising use of AI technologies in educational contexts.

By examining the nuanced ethical dimensions of Generative AI in teacher education, we hope to contribute to the development of a cohort of educators who are prepared to navigate the post-digital landscape armed with critical lenses and care.

References

Bearman, M., & Ajjawi, R. (2023). Learning to work with the black box: Pedagogy for a world with artificial intelligence. British Journal of Educational Technology, 54(5), 1160–1173. https://doi.org/10.1111/bjet.13337

Fawns, T. (2022). An entangled pedagogy: Looking beyond the pedagogy—technology dichotomy. Postdigital Science and Education, 4(3), 711–728. https://doi.org/10.1007/s42438-022-00302-7

Mishra, P., Warr, M., & Islam, R. (2023). TPACK in the age of ChatGPT and Generative AI. Journal of Digital Learning in Teacher Education, 39(4), 235–251. https://doi.org/10.1080/21532974.2023.2247480

UNESCO (2024). AI competency framework for teachers. https://unesdoc.unesco.org/ark:/48223/pf0000391104



A Mixed Methods Study Assessing How GenAI Generated Feedback Compares to Tutor Feedback on Capstone 3rd Level Research Projects

Francis Ward, Pia O'Farrell, Ernesto Panadero, Orna Farrell

Dublin City University, Ireland

This paper explores the potential of using Generative AI (GenAI) to support tutors in providing feedback on capstone research projects in education. Tutors currently limit feedback to the first three chapters due to workload constraints, leading to student dissatisfaction and concerns about the quality of the final chapters. To address these issues, this study examines how the feedback process can be enhanced by integrating GenAI, allowing tutors to prioritise feedback for analytical sections which require higher-order thinking.

Despite GenAI’s growing capabilities, university staff have been hesitant to adopt it productively due to a risk-averse approach, driven by concerns about penalties and ethical challenges (Ross, 2024). This limits how staff use GenAI for routine tasks, confining its application to teaching about AI rather than utilising it as a teaching tool. As students increasingly embrace AI technologies, educators need to integrate AI into teaching practices to prepare them for the evolving world of work (Cathcart in Ross, 2024).

This study investigates whether GenAI can generate high-quality feedback on procedural sections, enabling tutors to concentrate on providing more complex analytical feedback. It is hypothesised that while GenAI will excel in providing feedback on procedural and presentation tasks, human judgement will remain essential for complex analysis and interpretation.

This research builds on studies investigating “teacher-facing” applications of GenAI (Baker et al., 2019; Zawacki-Richter et al., 2019), emphasising the need for Higher Education Institutions (HEIs) to understand and integrate generative AI tools while preserving academic integrity (QQI, 2023). Existing literature highlights the central role of rubrics in GenAI-assisted assessment (e.g., Li et al., 2024) and suggests that although GenAI offers scalability in marking, it may not fully replace human judgement in assessing higher-order learning (Wetzler et al., 2024).

The study employs a mixed methods approach with 182 students and 30 tutors. Tutors will assess student work using a rubric, and selected submissions will be input into GenAI for marks and feedback generation. Tutors will then compare the AI-generated feedback with their own to evaluate its accuracy and usefulness. Data collection includes interviews with tutors to gather their perceptions of using GenAI, and quantitative analysis will compare rubric usage and marks awarded by tutors versus GenAI.

Although challenges such as tutor participation time, technical skills, and ethical considerations are anticipated, the study offers significant opportunities. By integrating GenAI into the assessment of procedural writing, tutors are enabled to focus on providing analytical feedback, potentially improving student performance. Furthermore, the study aligns with policy discussions emphasising ethical AI integration, ensuring that human oversight complements AI technologies in education (QQI, 2023).

In conclusion, this paper aims to demonstrate the potential of GenAI to enhance feedback quality, improve student outcomes, support tutor development, and contribute to educational policy and practice. By addressing current challenges in the large-scale provision of feedback, this research looks forward to the development of a more effective, technologically integrated educational landscape.

References

Baker, T. (2019). Educ-AI-tion Rebooted? Exploring the future of artificial intelligence in schools and colleges. Nesta Foundation. https://media.nesta.org.uk/documents/Future_of_AI_and_education_v5_WEB.pdf.

Li, J., et al. (2024). AI-assisted marking: Functionality and limitations of ChatGPT in written assessment evaluation. Australasian Journal of Educational Technology. https://doi.org/10.14742/ajet.9463

QQI (2023). Advice on artificial intelligence in education and training. https://www.qqi.ie/news/advice-on-artificial-intelligence-in-education-and-training

Ross, J. (2024). Higher education staff missing opportunity to use generative AI tools. Times Higher Education. https://www.timeshighereducation.com/news/ai-potential-squandered-universities-risk-focused-approach

Wetzler, E. L., et al. (2024). Grading the Graders: Comparing Generative AI and Human Assessment in Essay Evaluation. Teaching of Psychology. https://doi.org/10.1177/00986283241282696

Zawacki-Richter, O., et al. (2019). Systematic review of research on artificial intelligence applications in higher education. International Journal of Educational Technology in Higher Education, 16(1), 39. https://doi.org/10.1186/s41239-019-0171-0



From Mastery to Networks: Sociotechnical AI Systems and Human Agency

Ana Mouta, Ana María Pinto-Llorente, Eva María Torrecilla-Sánchez

Faculty of Education, University of Salamanca, Spain


This research explores the relationship between human agency and the use of sociotechnical AI technologies in educational contexts. It critically analyses how debates around ethics and notions of privacy have overshadowed key considerations of intimacy, secrecy, determination, and agency. Although AI applications in education are often promoted as transformative tools for enhancing various aspects of the learning experience, the empirical evidence supporting these claims remains limited or questionable – particularly when automation prioritises efficiency at the expense of agency on multiple levels. In this context, the study focuses specifically on how these technologies shape agency dynamics across the subjective, intersubjective, and collective dimensions as perceived by educators. Departing from traditional notions of agency as mastery or control, it adopts a framework of distributed agency, where action emerges from relational networks rather than being confined to individual entities.

To explore how educational actors collectively conceive the particularities of AI, especially concerning automation and its implications for human agency, this study employs a qualitative analysis of focus group discussions. The cohort consisted of 19 educators (10 males and 9 females) from five countries. The participants were selected based on their experience in K-12 teacher education and their proficiency in Spanish, the primary language of the research centre. Convenience sampling was initially used, followed by snowball sampling to further increase cultural diversity. While the sample size is relatively small, it is diverse in terms of cultural and professional backgrounds (e.g., associate professors, researchers, former K-12 teachers now holding governance roles in the Ministry of Education), ensuring a range of perspectives on the use of AI in education. Ethical approval for this study was obtained from the University of Salamanca, ensuring the protection of participants' rights and confidentiality.

The discussions addressed the use of AI applications in educational settings, with a specific focus on participants’ spontaneous reflections about different aspects of agency. The methodology enabled a comprehensive exploration of teachers' concerns. At the subjective level, teachers explored their role in fostering critical reasoning, decision-making, and moral development in students, acknowledging the potential negative impact of AI systems that provide overly rapid feedback, which may diminish students' sense of control, motivation, and emotional involvement. Teachers also express concerns about the impact of AI on individuality, subjectivity, and self-regulation, and question the relevance of AI evaluations in fostering self-reflectiveness and lasting academic engagement. At the intersubjective level, educators stress the importance of their authority and role as scaffolding figures. However, they largely overlook the impact of AI on peer relationships and the role of parents in supporting learning. At the collective level, teachers advocate for the preservation of schools as democratic spaces that foster creativity, plurality, and emotional synchrony, while warning against the threat posed by AI-enhanced technologies that may undermine critical pedagogy and collective educational experiences. Generally, teachers highlight the importance of integrating AI in a way that preserves meaningful educational experiences and supports organic, collaborative learning.

These findings reveal that while educators are attuned to their students’ subjective sense of agency – encompassing how they foresee, plan, self-regulate, and self-reflect – they often overlook the broader intersubjective and systemic implications of AI systems. This oversight risks fostering narratives that envision a diminished role for teachers in shaping their educational ecosystem cultures. By foregrounding these relational dynamics, the study highlights how AI’s mediating role complicates the maintenance of a sense of belonging in collective settings and engagement with moral judgment. It also underscores the need for further research into the long-term effects of AI on agency dynamics across diverse educational experiences. Ultimately, the study aims to advance the understanding of actants – entities that contribute to distributed agency – before they consolidate into dominant actors within educational networks. This endeavour calls for a collective responsibility among all educational stakeholders to deeply engage with the processes through which schools meaningfully fulfil their roles in qualification, subjectivation, and socialisation, nurturing these as shared and participatory actions.

References

Bandura, A. (2006). Toward a psychology of human agency. Perspectives on Psychological Science, 1, 164–180. https://doi.org/10.1111/j.1745-6916.2006.00011.x

Biesta, G., & Tedder, M. (2006). How is agency possible? Towards an ecological understanding of agency-as-achievement (Working Paper Five). Learning Lives: Learning, Identity, and Agency in the Life Course. Teaching and Learning Research Programme.

Latour, B. (2005). Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford University Press.

Latour, B. (2014). Agency at the time of the Anthropocene. New Literary History, 45(1), 1-18. https://doi.org/10.1353/nlh.2014.0003

Moore, J. W. (2016). What is the sense of agency and why does it matter? Frontiers in Psychology, 7: 1272. https://doi.org/10.3389/fpsyg.2016.01272

Pasquinelli, M. (2020). The eye of the master: A social history of artificial intelligence. Verso.

Priestley, M., Biesta, G., & Robinson, S. (2015). Teacher agency: An ecological approach. Bloomsbury.



Integrating Design Thinking and Technology-Enhanced Learning in K-12 to Foster Socio-Scientific Understanding

Ahmed Mohammed1, Rafael Zerega1, Johanna Valender1, Nuno Otero2, Marcelo Milrad1

1Linnaeus University, Sweden; 2University of Greenwich

Background

Technology-enhanced learning (TEL) environments, driven by emerging technologies, can transform education by enhancing teaching and equipping students with essential 21st-century skills (Ragab et al., 2024). However, their effectiveness relies on addressing students' cognitive and emotional needs while supporting educators adapting to innovative pedagogies (Peterson et al., 2018). Student-centered approaches, while valuable, frequently neglect co-design processes and the need for professional development for teachers to integrate TEL effectively (Rajaram, 2023). Building on the Horizon Europe-funded Exten.(D.T.)² project, this study investigates how to integrate Design Thinking (DT) with interactive tools, such as ChoiCo, a gamified web-based platform that can foster creativity to address global challenges (Milrad et al., 2024). Additionally, the study explores the scalability of DT and TEL across diverse contexts, highlighting the need for open-ended, structured, feedback-driven activities to enhance teaching, foster student curiosity and creativity, and address critical Socio-Scientific Issues (SSI). Ultimately, the study presents an alternative pedagogical approach to addressing SSI while preparing students to thrive and contribute to a data-driven, interconnected world (Possaghi et al., 2024).

Study Design and Methodology

Two interventions were conducted at a school in Sweden during the Spring and Fall terms of 2024. The first involved 75 students, and the second engaged four mathematics teachers in workshops integrating DT, game design, and prototyping. Using a Design-Based Research approach, the interventions followed iterative cycles of design, testing, and refinement, guided by continuous feedback (Jetnikoff, 2015). The first intervention included pollinator conservation activities, in which students developed their problem-solving, programming, and analytical skills through ChoiCo game design. The second focused on equipping educators to develop curriculum-aligned games addressing SSI such as environmental hazards and global warming.

Discussion of Findings with Conclusion

The outcomes of the workshops highlighted both the potential and the challenges of integrating DT with tools like ChoiCo. While students demonstrated strong engagement in design, problem-solving, and gameplay activities, they had difficulty understanding game variables and DT phases, and encountered technical issues with the platform. These findings underscore the importance of co-designing tools with students and educators to improve usability and foster creativity. Future actions include developing a real-time analytics dashboard to track progress, creating personalized learning pathways, and expanding the integration of 21st-century SSI to enhance multidisciplinary learning.
References

Jetnikoff, A. (2015). Design-based research methodology for teaching with technology in English. Practical Literacy: The Early & Primary Years, 20(1), 23–26. Retrieved from https://www.eric.ed.gov/?id=EJ1072345

Milrad, M., Herodotou, C., Grizioti, M., Lincke, A., Girvan, C., Papavlasopoulou, S., Shrestha, S., & Zhang, F. (2024). Combining design thinking with emerging technologies in K-12 education. In Proceedings of the 23rd Annual ACM Interaction Design and Children Conference.

Peterson, A., Dumont, H., Lafuente, M., & Law, N. (2018). Understanding innovative pedagogies: Key themes to analyse new approaches to teaching and learning. OECD Education Working Papers, No. 172. OECD Publishing. https://doi.org/10.1787/2adf8e21-en

Possaghi, I., Zhang, F., Sharma, K., & Papavlasopoulou, S. (2024). Design thinking activities for K-12 students: Multi-modal data explanations on coding performance. In Proceedings of the 23rd Annual ACM Interaction Design and Children Conference (pp. 290–306). https://doi.org/10.1145/3628516.3655786

Ragab, K., Fernandez-Ahumada, E., & Martínez-Jiménez, E. (2024). Engaging minds—Unlocking potential with interactive technology in enhancing students’ engagement in STEM education. In Interdisciplinary Approaches for Educators’ and Learners’ Well-being: Transforming Education for Sustainable Development (pp. 53–66).

Rajaram, K. (2023). Future of learning: Teaching and learning strategies. In Learning Intelligence: Innovative and Digital Transformative Learning Strategies: Cultural and Social Engineering Perspectives (pp. 3–53). Springer Nature Singapore.



Re-configuring Optimisation Paths in Data-informed Learning Analytics

Johanna Velander

Linnaeus University, Sweden

Designing and shaping data-driven educational technologies often starts with accepting the validity of the underpinning values that have shaped and promoted common goals and visions for a shared future with technology (Rahm, 2023). Technology promises to optimise processes, making them more efficient and effective; in data-driven contexts, optimisation therefore usually means accuracy and efficiency, since these yield maximum profit. Because competitiveness and profit are highly valued in society today, there is a perceived risk of "falling behind" in the race to adapt to new technological developments such as AI (which is not, in fact, new at all). This fear can be countered by identifying what we risk losing if we adopt and adapt to technology-deterministic narratives without questioning the goals we are so afraid to miss.

Acknowledging this environment, I want to challenge the driving forces informing current data-driven practices in the educational context of learning analytics (LA) by imagining and co-constructing alternative LA solutions with stakeholders. The complexity of ML algorithms makes it difficult to engage with them explicitly; this study would therefore engage with these algorithms by "looking around, rather than inside, increasingly opaque and unknowable black boxes" (Perrotta & Selwyn, 2020, p. 254). More specifically, I take inspiration from Prinsloo's broad framework for "engaging with the potential but also the curtailing dangers of algorithmic decision-making in education" (Prinsloo, 2017). The study design would also reveal tensions between stakeholder values regarding data-driven LA and the currently dominant individual control model (Solove & Hartzog, 2024). The study takes its departure from an earlier master's thesis project (Velander, 2020) in which students at a university in Sweden were asked to reflect on data-driven practices in general and at their institution (Velander et al., 2021). A dashboard visualising the data that the Learning Management System Moodle collected about them was developed and deployed by the author (then an MSc student) on the university servers. Students attending several courses had access to the dashboard for two weeks, where they could find visualisations of their data logged by the system. A survey distributed before the dashboard intervention revealed low awareness of, and high acceptance towards, data collection, especially at the university. After having had access to the dashboard and seen their data used in different contexts, however, students raised many issues specific to how the data was used, who could access it, and what conclusions might be drawn from it.
Further, the results confirmed what previous research has noted: the powerlessness of the data subjects (students) to keep themselves informed and up to date, and, above all, to be heard and to contribute to change in current practices (Pangrazio & Selwyn, 2019; Solove & Hartzog, 2024).

Pangrazio, L., & Selwyn, N. (2019). ‘Personal data literacies’: A critical literacies approach to enhancing understandings of personal digital data. New Media & Society, 21(2), 419–437. https://doi.org/10.1177/1461444818799523

Rahm, L. (2023). Education, automation and AI: A genealogy of alternative futures. Learning, Media and Technology, 48(1), 6–24.

Solove, D. J., & Hartzog, W. (2024). Kafka in the Age of AI and the Futility of Privacy as Control (SSRN Scholarly Paper 4685553). Social Science Research Network. https://doi.org/10.2139/ssrn.4685553

Velander, J. (2020). Student awareness, perceptions and values in relation to their university data [Master's thesis].

Velander, J., Otero, N., Pargman, T. C., & Milrad, M. (2021). "We know what you were doing": Understanding learners' concerns regarding learning analytics and visualization practices in learning management systems. In Visualizations and Dashboards for Learning Analytics (pp. 323–347). Springer International Publishing.

 
4:00pm - 5:00pmKeynote address with Professor Punya Mishra
Virtual location: Zoom main room
Session Chair: R Lowney
Panel response with Dr Alison Egan, Mr Gavin Clinch, Ms Ella Brady and Ms Irina Grigorescu
 

The Mirror and the Machine: Navigating the Metaphors of Generative AI

Punya Mishra

Arizona State University, United States of America

As generative AI systems grow increasingly sophisticated, we find ourselves grasping for ways to understand and interact with this technology. We often resort to metaphors, but these metaphors, from the mechanistic to the mythical, in turn shape our perceptions and decisions in profound and often hidden ways. Yet with this particular technology we seem to draw metaphors from our own minds, anthropomorphizing AI in an attempt to make sense of it. This talk explores the spectrum of AI metaphors, examines their implications in contexts such as education, and confronts the fundamental opacity of both human and artificial cognition. Ultimately, it argues for the cultivation of "technological emotional intelligence": a reflective, intentional approach to the metaphors we use, so that we may make sense of this bizarre world we seem to be entering.

 

 
Conference: Education after the algorithm