Conference Agenda

Overview and details of the sessions of this conference.

Please note that all times are shown in the time zone of the conference.

 
Session Overview
Session: Morning parallel session 2
Time: Friday, 21/Feb/2025, 10:00am - 11:25am
Session Chair: Steve Welsh
Location: F215


Presentations
10:00am - 10:20am

After AI? Critical Digital Pedagogies of Place

Bonnie Stewart

University of Windsor, Canada

This session will outline two approaches to contemporary digital education, explore how they are grounded in differing values and visions, and present an overview of a research project on place-based digital pedagogies as a model for post-AI education.

If the concept of a post-AI world is intended to parallel that of the post-digital world (52group, 2009), then post-AI is not a world without Generative AI and other algorithmic tools, but one in which AI is pervasive. This is ‘post’ as omnipresence, wherein a signifier of change becomes itself ubiquitous, embedded across systems. This session unpacks and traces the post-AI educational imaginary – which is arguably upon us – and contrasts its values and trajectory with those of an alternate sociotechnical construct, that of participatory digital practice and pedagogy.

Participatory digital practice has its roots in relational practices that utilize the web to engage open and networked contributions to information abundance. This interactive Web 2.0 practice dominated the first ten to fifteen years of the 21st century, and shaped critical digital pedagogy as a participatory and often democratically-informed approach to learning. But over the past decade, the platforms on which participatory digital practices depend have been enclosed by data-extractive and increasingly automated corporate entities. The participatory practices of Web 2.0 have thus been displaced by Web 3.0 and the hype surrounding Generative AI, shifting digital practice away from contribution and co-creation. Because the ‘innovation’ lens of our attention economy emphasizes the capital potential of technologies decoupled from their affordances, the Web 3.0 post-AI imaginary is also largely a ‘black box’ (Latour, 1987) whose algorithmic structures remain obscured.

This trend away from participatory digital practice is amplified by cultural shifts. The promise that poverty and other social ills can be ‘solved’ with technology, framed as technosolutionism (Morozov, 2013) or access doctrine (Greene, 2021), prioritizes a skills focus that aligns with capital interests rather than supporting social structures or criticality. Solutionist thinking underpins much of the hype about GenAI in education, and leads to decision-makers acting on behalf of capital rather than students. In the wake of the COVID-19 emergency online pivot, learners themselves often view education as a task-oriented process. These intersecting trends toward an instrumentalized and algorithmic educational imaginary reinforce AI fantasies about futures decoupled from collective human cooperation. If we abandon digital pedagogy’s participatory roots in favour of the black box of the algorithm, we risk outsourcing the entire learning process away from human cognition, creativity, and connection.

As an alternative to the Web 3.0 version of a post-AI world, this session will outline place-based pedagogies as active, situated knowledges (Haraway, 1988) that can support digital participatory practices and critical pedagogical approaches. Emphasizing multiliteracies, agency, and relationship-building over solutionist skill acquisition, place-based participatory pedagogies are sociomaterial practices shaped by the specifics of built environments and digital spaces, geographies, cultures, personal attitudes, identities, and interests (Gravett & Ajjawi, 2022). The session will present an overview of a 2024-2025 research project with the University of the Highlands and Islands in Scotland, outlining how participation, opportunities for local and global contribution, and the enlistment of educators and learners in the firsthand experience and shaping of local life (Gruenewald, 2003) can form a basis for refusing full transition from Web 2.0 to Web 3.0. The session will emphasize the critical importance of preserving participatory learning experiences and connection as counterpoint to automated outputs, and underscore the role of educators in creating agential choices about which post-AI world we reinforce and validate.
References:
52group. (2009). Preparing for the postdigital era. https://docs.google.com/document/d/1TkCUCisefPgrcG317_hZa4PwZoQ8m7rL5AJF6PazHHQ/preview?pli=1

Gravett, K., & Ajjawi, R. (2022). Belonging as situated practice. Studies in Higher Education, 47(7), 1386–1396. https://doi.org/10.1080/03075079.2021.1894118

Greene, D. (2021). The promise of access: Technology, inequality, and the political economy of hope. MIT Press.

Gruenewald, D. A. (2003). Foundations of place: A multidisciplinary framework for place-conscious education. American Educational Research Journal, 40(3), 619-654. https://doi.org/10.3102/00028312040003619

Haraway, D. (1988). Situated knowledges: The science question in feminism and the privilege of partial perspective. Feminist Studies, 14(3), 575–599. https://doi.org/10.2307/3178066

Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Harvard University Press.

Morozov, E. (2013). To save everything, click here: The folly of technological solutionism. PublicAffairs.



10:20am - 10:40am

Designing Equitable Assessment Futures: Lessons From Students' Use Of Generative AI

Sukaina Walji, Francois Cilliers, Cheng-Wen Huang, Soraya Lester, Sanet Steyn

University of Cape Town, South Africa

This study explores the motivations behind university students’ engagement with generative AI (genAI) tools for assessment support, offering insights into their behaviours and decision-making processes. Conducted at the University of Cape Town, the research draws on three focus groups comprising 18 undergraduate students from diverse faculties and programmes to explore whether, how, and why students use genAI to support their assessment practices.

Findings revealed a spectrum of behaviours, from reliance on genAI for translating complex disciplinary language and task analysis to summarising content, enhancing assignment quality, and improving efficiency in information retrieval. Some students described how genAI tools enabled them to take greater ownership of their academic work by guiding ideation, improving clarity, and providing a sense of control over challenging tasks. However, non-usage was also noted, influenced by concerns about plagiarism accusations, institutional guidance, and the perceived irrelevance of genAI for certain tasks. These varied behaviours point to a continuum of student agency, with some students viewing genAI as a tool to enhance learning autonomy, while others felt constrained by the risks and limitations of its use.

The research has drawn on the COM-B framework (Michie et al., 2011) to better understand these behaviours, emphasising the interplay of Capability (e.g., students' AI literacy), Opportunity (e.g., accessibility of tools and societal norms), and Motivation (e.g., perceived utility and ethical considerations).

The study highlights critical implications for higher education practice, particularly in reviewing and shaping assessment practices to account for genAI's evolving role. These insights are especially pertinent in the context of extreme inequalities that characterise the South African higher education sector, where students' opportunities to engage with genAI may differ significantly. Short-term recommendations include fostering AI literacy and co-creating equitable policies for genAI usage, striving towards clarity and consistency to mitigate disparities. Medium- to long-term strategies could involve redefining academic integrity norms and standards, as well as addressing and reconceptualising blurred boundaries between human and AI contributions.

By bridging behavioural insights with practical interventions, this research contributes to the discourse on the possibilities and challenges of ethical, equitable and transformative uses of generative AI in education, while highlighting the importance of supporting students’ agency in navigating these tools.

References

Michie, S., van Stralen, M. M., & West, R. (2011). The behaviour change wheel: A new method for characterising and designing behaviour change interventions. Implementation Science, 6(42). https://doi.org/10.1186/1748-5908-6-42



10:40am - 11:00am

GenAI Integration Challenges: Learner Expectations and Effects on Trust

Dónal Mulligan

Dublin City University, Ireland

While initial responses across academia to the seemingly sudden emergence of highly capable chat-interfaced AI in the post-GPT period have focused largely on the implications of such technology for plagiarism - and an implied lack of trust in how learners might adopt and adapt to these tools - this research presentation inverts the teacher-learner direction to investigate how emerging AI has affected students' trust in academics.

A research project by the author in mid-2024 (Mulligan, 2024, forthcoming), which focused on tracking attitudes among media students in Irish universities to emerging AI technology and tools, found that, regardless of actual use of AI by teachers in the three institutions studied, students suspected that the existence of the technology implied it must already be in use by their lecturers. This finding was further confirmed in an ongoing wider study, in which undergraduate student focus groups in a national set of institutions repeated suspicions that AI is being surreptitiously used by teaching staff while being banned, or remaining undiscussed, for student use. This submission will present the findings of this wider study, developing insights from students themselves on how their trust in academic fairness and in the conduct and quality of their teaching staff is undermined by a pervasive perception that AI tools are being hypocritically used in the development of content or the assessment of work, while remaining unavailable to students.

The research offers a novel and timely set of insights on learner attitudes and expectations and provides imperatives for the continuing development of Teaching & Learning practices at a time of considerable upheaval in the wake of GenAI. Alongside analysis of these negative effects on academic process reputation, the focus group findings provide insights on the state of learners' critical engagement with AI shortcomings, their perception of the relevance of AI tools to graduate skill profiles and career plans, and their sources of information on emerging GenAI. Complementing existing studies of student experience and perceptions in other geolocales (e.g. da Silva et al., 2024; Mireku, Kweku, & Abenba, 2024), the study adds Irish undergraduate student perspectives, drawn from several disciplines and regions.

References:

da Silva, M., Ferro, M., Mourão, E., Seixas, E., Viterbo, J., & Salgado, L. (2024). Ethics and AI in higher education: A study on students’ perceptions. https://doi.org/10.1007/978-3-031-54235-0_14

Mireku, M., Kweku, A., & Abenba, D. (2024). Higher education students' perception on the ethical use of generative AI: A study at the University of Cape Coast. https://doi.org/10.13140/RG.2.2.10274.64967

Mulligan, D. (2024, forthcoming). "Hypocritical much?": Attitudes to Generative AI Tools in an Irish Media Education Context. Teaching Media Quarterly.



11:00am - 11:15am

Token Offerings: Contemplating the Cost of our Invisible Transactions within AIED Environments

Steve Welsh

Dublin City University TEU, Ireland

When educators and institutions embed AI-driven tools within our learning environments, what is the true cost of the contracts we’re signing on behalf of our learners (Saenko, 2023)? When we’ve made the complex transactions between prompt, calculation, and output invisible, what are we obscuring (Blum, 2019)? While the developers of large language models designate ‘tokens’ as the units of quantification by which characters, phonemes, and phrases are consumed and produced, this paper asks what metaphors (Weller, 2022) might be more appropriate as we hurtle towards a world made too hot by the sum of all our clicks. Perhaps the petrochemical metaphor is more apt today than when Clive Humby first declared “data is the new oil” almost two decades ago (Holmes & Tuomi, 2022). Or perhaps we can look to other symbolic taxonomies to illustrate the cost of our consumption. Consider, for instance, if, during the time our search query or prompt results were being formulated, the user were to visualise the incremental melting of a glacier in Greenland, the impact of gale force storm winds striking a family home in North Carolina, the sun striking a barren field in South Africa, a tree succumbing to wildfires in Argentina, or the gradual bleaching of a coral reef off the Australian coast. Could such in situ interventions serve to foster a greater sense of intentionality, or even serve to restrain the often arbitrary exercise of AI consumption in educational environments?

This paper seeks to rematerialise the dematerialised within AIED, or to at least make it legible, as we increasingly marry our teaching and learning practices to these energy-intensive technologies. Reflecting on his own practice as a Learning Technologist supporting the adoption of AI technologies, the researcher seeks ways to embed a more tangible awareness, or visibility, of energy consumption within our digital learning environments, and to propose some methods by which we can factor energy consumption into our learning design, with the aim of adapting practices of degrowth (Selwyn, 2023).

References

Blum, A. (2019). Tubes: A journey to the center of the internet. Ecco.

Holmes, W., & Tuomi, I. (2022). State of the art and practice in AI in education. European Journal of Education, 57, 542–570.

Saenko, K. (2023). A Computer Scientist Breaks Down Generative AI’s Hefty Carbon Footprint. Scientific American.

Selwyn, N. (2023). Lessons to be learnt? Education, techno-solutionism and sustainable development. In Sætra, H. (Ed.), Techno-solutionism and sustainable development. Routledge.

Weller, M. (2022). Metaphors of Ed Tech. Athabasca University Press.



 
Conference: Education after the algorithm
Conference Software: ConfTool Pro 2.6.153 · © 2001–2025 by Dr. H. Weinreich, Hamburg, Germany