Conference Agenda
Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads if available).
Please note that all times are shown in the time zone of the conference.
Session Overview
Location: F215
Date: Friday, 21/Feb/2025
10:00am - 11:25am
Morning parallel session 2
Location: F215
Session Chair: Steve Welsh
10:00am - 10:20am
After AI? Critical Digital Pedagogies of Place
University of Windsor, Canada

This session will outline two approaches to contemporary digital education, explore how they are grounded in differing values and visions, and introduce a research project on place-based digital pedagogies as a model for post-AI education.

If the concept of a post-AI world is intended to parallel that of the post-digital world (52 group, 2009), then post-AI is not a world without Generative AI and other algorithmic tools, but one in which AI is pervasive. This is ‘post’ as omnipresence, wherein a signifier of change becomes itself ubiquitous, embedded across systems. This session unpacks and traces the post-AI educational imaginary – which is arguably upon us – and contrasts its values and trajectory with those of an alternate sociotechnical construct, that of participatory digital practice and pedagogy. Participatory digital practice has its roots in relational practices that utilize the web to engage open and networked contributions to information abundance. This interactive Web 2.0 practice dominated the first ten to fifteen years of the 21st century, and shaped critical digital pedagogy as a participatory and often democratically informed approach to learning. But over the past decade, the platforms on which participatory digital practices depend have been enclosed by data-extractive and increasingly automated corporate entities. The participatory practices of Web 2.0 have thus been displaced by Web 3.0 and the hype surrounding Generative AI, shifting digital practice away from contribution and co-creation. Because the ‘innovation’ lens of our attention economy emphasizes the capital potential of technologies decoupled from their affordances, the Web 3.0 post-AI imaginary is also largely a ‘black box’ (Latour, 1987) whose algorithmic structures remain obscured.

This trend away from participatory digital practice is amplified by cultural shifts. The promise that poverty and other social ills can be ‘solved’ with technology, framed as technosolutionism (Morozov, 2013) or the access doctrine (Greene, 2021), prioritizes a skills focus that aligns with capital interests rather than supporting social structures or criticality. Solutionist thinking underpins much of the hype about GenAI in education, and leads to decision-makers acting on behalf of capital rather than students. In the wake of the COVID-19 emergency online pivot, learners themselves often view education as a task-oriented process. These intersecting trends toward an instrumentalized and algorithmic educational imaginary reinforce AI fantasies about futures decoupled from collective human cooperation. If we abandon digital pedagogy’s participatory roots in favour of the black box of the algorithm, we risk outsourcing the entire learning process away from human cognition, creativity, and connection.

As an alternative to the Web 3.0 version of a post-AI world, this session will outline place-based pedagogies as active, situated knowledges (Haraway, 1988) that can support digital participatory practices and critical pedagogical approaches. Emphasizing multiliteracies, agency, and relationship-building over solutionist skill acquisition, place-based participatory pedagogies are sociomaterial practices shaped by the specifics of built environments and digital spaces, geographies, cultures, personal attitudes, identities, and interests (Gravett & Ajjawi, 2022).
The session will present a 2024-2025 research project with the University of the Highlands and Islands in Scotland, outlining how participation, opportunities for local and global contribution, and the enlistment of educators and learners in the firsthand experience and shaping of local life (Gruenewald, 2003) can form a basis for refusing a full transition from Web 2.0 to Web 3.0. The session will emphasize the critical importance of preserving participatory learning experiences and connection as a counterpoint to automated outputs, and underscore the role of educators in creating agential choices about which post-AI world we reinforce and validate.

References
Gravett, K., & Ajjawi, R. (2022). Belonging as situated practice. Studies in Higher Education, 47(7), 1386–1396. https://doi.org/10.1080/03075079.2021.1894118
Greene, D. (2021). The promise of access: Technology, inequality, and the political economy of hope. MIT Press.
Gruenewald, D. A. (2003). Foundations of place: A multidisciplinary framework for place-conscious education. American Educational Research Journal, 40(3), 619–654. https://doi.org/10.3102/00028312040003619
Haraway, D. (1988). Situated knowledges: The science question in feminism and the privilege of partial perspective. Feminist Studies, 14(3), 575–599. https://doi.org/10.2307/3178066
Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Harvard University Press.
Morozov, E. (2013). To save everything, click here: The folly of technological solutionism. PublicAffairs.

10:20am - 10:40am
Designing Equitable Assessment Futures: Lessons From Students' Use Of Generative AI
University of Cape Town, South Africa

This study explores the motivations behind university students’ engagement with generative AI (genAI) tools for assessment support, offering insights into their behaviours and decision-making processes. Conducted at the University of Cape Town, the research draws on three focus groups comprising 18 undergraduate students from diverse faculties and programmes to explore whether, how, and why students use genAI to support their assessment practices. Findings revealed a spectrum of behaviours, from reliance on genAI for translating complex disciplinary language and task analysis to summarising content, enhancing assignment quality, and improving efficiency in information retrieval. Some students described how genAI tools enabled them to take greater ownership of their academic work by guiding ideation, improving clarity, and providing a sense of control over challenging tasks. However, non-usage was also noted, influenced by concerns about plagiarism accusations, institutional guidance, and the perceived irrelevance of genAI for certain tasks. These varied behaviours point to a continuum of student agency, with some students viewing genAI as a tool to enhance learning autonomy, while others felt constrained by the risks and limitations of its use. The research has drawn on the COM-B framework (Michie et al., 2011) to better understand these behaviours, emphasising the interplay of Capability (e.g., students' AI literacy), Opportunity (e.g., accessibility of tools and societal norms), and Motivation (e.g., perceived utility and ethical considerations). The study highlights critical implications for higher education practice, particularly in reviewing and shaping assessment practices to account for genAI's evolving role. These insights are especially pertinent in the context of extreme inequalities that characterise the South African higher education sector, where students' opportunities to engage with genAI may differ significantly. Short-term recommendations include fostering AI literacy and co-creating equitable policies for genAI usage, striving towards clarity and consistency to mitigate disparities. Medium- to long-term strategies could involve redefining academic integrity norms and standards, as well as addressing and reconceptualising blurred boundaries between human and AI contributions. By bridging behavioural insights with practical interventions, this research contributes to the discourse on the possibilities and challenges of ethical, equitable and transformative uses of generative AI in education, while highlighting the importance of supporting students’ agency in navigating these tools.

References
Michie, S., van Stralen, M. M., & West, R. (2011). The behaviour change wheel: A new method for characterising and designing behaviour change interventions. Implementation Science, 6(42). https://doi.org/10.1186/1748-5908-6-42

10:40am - 11:00am
GenAI Integration Challenges: Learner Expectations and Effects on Trust
Dublin City University, Ireland

While initial responses across academia to the seemingly sudden emergence of highly capable chat-interfaced AI in the post-GPT period have focused largely on the implications of such technology for plagiarism - and an implied lack of trust in how learners might adopt and adapt to these tools - this research presentation inverts the teacher-learner direction to investigate how emerging AI has affected student trust of academics. A research project by the author in mid-2024 (Mulligan, 2024, forthcoming), which focused on tracking attitudes among media students in Irish universities to emerging AI technology and tools, found that regardless of actual use of AI by teachers in the three institutions studied, students suspected that the existence of the technology implied that it must already be in use by their lecturers. This finding was further confirmed in an ongoing wider study, where undergraduate student focus groups in a national set of institutions repeated suspicions that AI is being surreptitiously used by teaching staff at the same time as being banned or remaining undiscussed for student use. This submission will present the findings of this wider study, developing insights from students themselves on how their trust in academic fairness and the conduct and quality of their teaching staff is undermined by a pervasive perception that AI tools are being hypocritically used in the development of content or the assessment of work, while remaining unavailable to students. The research provides a novel and timely set of insights on learner attitudes and expectations and provides imperatives for the continuing development of Teaching & Learning practices at a time of considerable upheaval in the wake of GenAI. Alongside analysis of these negative effects on the reputation of academic processes, the focus group findings provide insights on the state of learners' critical engagement with AI shortcomings, their perception of the relevance of AI tools to graduate skill profiles and career plans, and their sources of information on emerging GenAI. Complementing existing studies of student experience and perceptions in other geolocales (e.g. da Silva et al., 2024; Mireku, Kweku, & Abenba, 2024), the study adds Irish undergraduate student perspectives, drawn from several disciplines and regions.

References
da Silva, M., Ferro, M., Mourão, E., Seixas, E., Viterbo, J., & Salgado, L. (2024). Ethics and AI in higher education: A study on students’ perceptions. https://doi.org/10.1007/978-3-031-54235-0_14
Mireku, M., Kweku, A., & Abenba, D. (2024). Higher education students' perception on the ethical use of generative AI: A study at the University of Cape Coast. https://doi.org/10.13140/RG.2.2.10274.64967
Mulligan, D. (2024, forthcoming). "Hypocritical much?" - Attitudes to generative AI tools in an Irish media education context. Teaching Media Quarterly.

11:00am - 11:15am
Token Offerings: Contemplating the Cost of our Invisible Transactions within AIED Environments
Dublin City University TEU, Ireland

When educators and institutions embed AI-driven tools within our learning environments, what is the true cost of the contracts we’re signing on behalf of our learners (Saenko, 2023)? When we’ve made the complex transactions between prompt, calculation, and output invisible, what are we obscuring (Blum, 2019)? While the developers of large language models designate ‘tokens’ as the units of quantification by which characters, phonemes, and phrases are consumed and produced, this paper asks what metaphors (Weller, 2022) might be more appropriate as we hurtle towards a world made too hot by the sum of all our clicks. Perhaps the petrochemical metaphor is more apt today than when Clive Humby first declared “data is the new oil” almost two decades ago (Holmes & Tuomi, 2022). Or perhaps we can look to other symbolic taxonomies to illustrate the cost of our consumption. Consider, for instance, if during the time our search query or prompt results were being formulated, the user were to visualise the incremental melting of a glacier in Greenland, the impact of gale-force storm winds striking a family home in North Carolina, the sun striking a barren field in South Africa, a tree succumbing to wildfires in Argentina, or the gradual bleaching of a coral reef off the Australian coast. Could such in situ interventions serve to foster a greater sense of intentionality, or even serve to restrain the often arbitrary exercise of AI consumption in educational environments? This paper seeks to rematerialise the dematerialised within AIED, or at least to make it legible, as we increasingly marry our teaching and learning practices to these energy-intensive technologies. Reflecting on his own practice as a Learning Technologist supporting the adoption of AI technologies, the researcher seeks ways to embed a more tangible awareness, or visibility, of energy consumption within our digital learning environments, and to propose some methods by which we can factor energy consumption into our learning design, with the aim of adapting practices of degrowth (Selwyn, 2023).

References
Blum, A. (2019). Tubes: A journey to the center of the internet. Ecco.
Holmes, W., & Tuomi, I. (2022). State of the art and practice in AI in education. European Journal of Education, 57, 542–570.
Saenko, K. (2023). A computer scientist breaks down generative AI’s hefty carbon footprint. Scientific American.
Selwyn, N. (2023). Lessons to be learnt? Education, techno-solutionism and sustainable development. In Sætra, H. (Ed.), Techno-solutionism and sustainable development. Routledge.
Weller, M. (2022). Metaphors of Ed Tech. Athabasca University Press.
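One way the in situ intervention this abstract imagines could be made concrete is to attach a rough, human-scale energy comparison to each prompt a learner submits. The sketch below is a minimal, hypothetical illustration of that idea, not a tool described in the paper: ENERGY_WH_PER_TOKEN is an assumed placeholder figure (published per-query estimates vary by orders of magnitude), and the light-bulb comparison is just one possible symbolic taxonomy.

```python
# Minimal sketch: surfacing a rough energy estimate alongside an LLM response.
# All constants are illustrative assumptions, not measured values.

ENERGY_WH_PER_TOKEN = 0.03  # assumed average energy per token, in watt-hours
LED_BULB_WATTS = 10         # a 10 W LED bulb, used as a tangible comparison


def estimate_energy_wh(prompt_tokens: int, completion_tokens: int) -> float:
    """Rough energy estimate for one request, in watt-hours."""
    return (prompt_tokens + completion_tokens) * ENERGY_WH_PER_TOKEN


def legibility_note(prompt_tokens: int, completion_tokens: int) -> str:
    """Render the estimate as a comparison a learner might see in situ."""
    wh = estimate_energy_wh(prompt_tokens, completion_tokens)
    minutes = wh / LED_BULB_WATTS * 60  # minutes a 10 W bulb could run on this
    return (f"This response used roughly {wh:.2f} Wh: "
            f"enough to light a 10 W LED bulb for {minutes:.1f} minutes.")


if __name__ == "__main__":
    print(legibility_note(prompt_tokens=120, completion_tokens=480))
```

In the spirit of the abstract, such a note could be rendered next to the chat output in a virtual learning environment, making the otherwise invisible transaction legible at the moment of consumption.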
2:00pm - 3:10pm
Afternoon parallel session 2
Location: F215
Session Chair: James Brunton
Meeting the Opportunities and Challenges of Generative AI through Student Partnership: Co-Creation and the Development of Student GenAI Guidelines at Maynooth University
Maynooth University, Ireland

With the release of ChatGPT in 2022, many educators and commentators realised the potential of Large Language Models (LLMs) to disrupt education. Much ink has been spent discussing the challenges of these technologies (particularly to academic integrity), while also acknowledging their potential affordances (Cotton, Cotton, & Shipway, 2023; Mollick & Mollick, 2023a; Mollick & Mollick, 2023b). However, the boosterism that has surrounded their release and promotion has skewed discussion concerning the future of GenAI in higher education: speculations about personalized learning, the ‘potential’ affordances of GenAI and pronouncements of rapid uptake in the current and future workplace appeal to the skills agenda which has come to dominate neo-liberal institutions of higher education (for claims of productivity increases see Eloundou, Manning, Mishkin & Rock, 2023; see also Microsoft, 2024). Given the confusion about the impact of these technologies, it is no surprise that students are concerned, ill-prepared and poorly informed about their use. This paper focuses on Maynooth University’s response to this challenge: the use of student/staff partnership in the co-creation of its student-facing GenAI guidelines. It explores the purpose of the project, its format, reflections on the creation process and, finally, the guidelines that it produced. In doing so, the paper serves two central purposes: firstly, it adds to existing discussions about the use and misuse of GenAI in higher education; secondly, it argues for the essential role of staff/student co-creation in responding to this.

Reference List
Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2023). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in Education and Teaching International, 61(2), 228–239. https://doi.org/10.1080/14703297.2023.2190148
Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). GPTs are GPTs: An early look at the labor market impact potential of large language models. arXiv preprint. https://doi.org/10.48550/arXiv.2303.10130
Microsoft (2024). Generative AI in Ireland 2024 – Adoption rates and trends. https://pulse.microsoft.com/en-ie/work-productivity-en-ie/na/fa1-generative-ai-adoption-rates-are-on-the-rise-in-workplaces-according-to-our-latest-report-supported-by-trinity-college-dublin/
Mollick, E. R., & Mollick, L. (2023a). New modes of learning enabled by AI chatbots: Three methods and assignments. Preprint, SSRN. https://doi.org/10.2139/ssrn.4300783
Mollick, E. R., & Mollick, L. (2023b). Using AI to implement effective teaching strategies in classrooms: Five strategies, including prompts. The Wharton School Research Paper. SSRN. https://ssrn.com/abstract=4391243
Integrating Automated Writing Evaluation with Teacher Feedback: Enhancing Writing Accuracy and Autonomy in Turkish EFL Classrooms
Mary Immaculate College, University of Limerick, Ireland

This study explores the impact of integrating Automated Writing Evaluation (AWE) with traditional teacher feedback on the writing performance of Turkish EFL students. Using a quasi-experimental design, the research aims to determine whether the combined use of automated and human feedback can enhance students' writing scores and accuracy more effectively than teacher feedback alone. The study was conducted with 120 undergraduate EFL students, who were divided into an experimental group receiving combined feedback and a control group receiving only teacher feedback. Data were gathered through pre-test and post-test writing tasks, error analysis reports generated by the Criterion AWE tool, and student reflections on their feedback experience. The results indicate that both feedback approaches led to improvements in students' overall writing scores, with no statistically significant difference between the groups in terms of overall performance. However, the experimental group showed a more pronounced reduction in grammatical and mechanical errors, suggesting that the integration of AWE with teacher feedback may be more effective in addressing these specific aspects of writing. Students in the experimental group benefited from the immediate, detailed, and accessible nature of automated feedback, which allowed them to correct errors while their ideas were still fresh. Additionally, participants reported that the promptness and specificity of AWE feedback motivated them to improve their writing. Despite these benefits, some students expressed concerns over the limited focus of automated feedback on content and occasional vague or incorrect advisory messages. The findings underscore the potential of combining AWE systems with traditional feedback to alleviate the burden on teachers by allowing them to focus more on content and higher-order writing concerns. Moreover, the use of automated feedback fosters a more autonomous, learner-centered environment, encouraging students to self-regulate and engage actively in the writing process. The study's results align with existing literature on the advantages of technology-enhanced language learning, highlighting the importance of immediate feedback in facilitating learning and reducing recurrent errors. However, the study also acknowledges limitations, such as the non-randomized sampling, the specific context of English majors, and the absence of a delayed post-test to assess long-term effects. Future research should investigate the integration of AWE in diverse learner groups and instructional contexts, as well as explore its sustained impacts on writing proficiency.

Turning Off The Gaslight: The Best Of Scholarly Critical Thinking As A Response To GenAInt's Dystopias
Various open education NGOs, including Open Washington, Open Oregon, Creative Commons, etc., Italy

It is said that under Mussolini, at least the trains in Italy ran on time.
Similarly, there are probably some use cases of generative AI [genAInt] which make the world a better place -- some tools using large language models [LLMs] to support learners with disabilities come to mind -- but nearly all proposed uses are actually quite dystopian if one stops to look with a calm but critical eye. Moreover, turning down the gaslighting from big tech companies hoping to justify the hundreds of billions of dollars of investment they hope to continue receiving, it should be clear that the fundamentals of genAInt are horrible for the global climate, for the lives of creators, for the fundamental ethics of the academy, and for the gap between rich and poor, powerful and powerless. In fact, I would argue that in the context of education, one of the deepest wounds the current genAInt hype cycle is inflicting is to fundamentally devalue human knowledge, experience, and expertise: if an LLM can spit out a brand new calculus or art history textbook in an instant, what use are disciplinary experts ... and why would students waste their time building that expertise -- getting an education! -- if they can get the same outputs by typing prompts into LLMs? It is not a time to abandon the very idea of education and expertise when major global empires have democratically elected leaders who lied (and continue to lie) to the public about basic history, science, and economics. The hucksters of genAInt solemnly evoke the existential danger of runaway artificial general intelligence -- which honest computer scientists know is as distant a dream today as it was when Alan Turing founded their subject three quarters of a century ago -- while in fact it is the concentration of wealth, the destruction of our climate, and the attempted destruction of the idea of expertise which truly threaten our world. Instead, it is a perfect time to think critically about what the science of LLMs tells us they really are and can ever really do; to love and use technology where it empowers humans and otherwise makes the world a better place, as even the original Luddites did (contrary to the usual connotations of that word, as described in a recent book [Merchant: Blood in the Machine, 2023]); but to hate and fight against technology which steals, surveils, empowers the powerful and disenfranchises the powerless. Schools and universities can protect their communities by strong policies, and nations or transnational associations like the European Union can use strong laws to protect their information ecosystems from the enshittification which genAInt is rolling over the internet like a climate change-energized hurricane.

Dueling Discourses on AI in Higher Education: Critically Surfacing Tensions and Grounding Narratives in Context
Dublin City University, Ireland

Two things can be true at the same time: 1) AI tools represent a rapidly developing area of innovative and disruptive technological advancement that impacts on the operation of higher education programmes and institutions, which demands that this be given attention at every level of higher education institutions; and 2) an unsettling amount of the discourse in and around the use of AI tools in higher education is confused, contradictory, untethered from relevant context, and does more harm than good. This paper will chart tensions between the different ‘AI in higher education’ discourses and will ground them in different, relevant contexts, incorporating reflections on the author’s academic experiences and practices.
Discourses on ‘AI in higher education’ frequently occur with AI tools discussed as something new that students are using and staff need to learn about so that they can adapt their teaching and assessment practices to this one technology/set of technologies. These discussions are frequently divorced from any discussion of existing institutional digital competency frameworks and related, strategic, resourced capacity building for staff and students. Discourse also frequently acknowledges the myriad ethical issues with staff and student use of different AI tools, while in the same breath saying that use of AI tools is both unavoidable and desirable. This paper puts forward that any attempt to bring a technology/set of technologies into educational practices should be grounded in an ecosystem of evidence-based capacity building in digital competencies, and a framework for ethical use of technology in teaching and learning.

The ‘students are using it, so staff have to embrace it’ discourse is also highly aligned with the techno-deterministic and techno-optimistic narratives utilised by technology/edtech companies in the past, as detailed in critical digital pedagogy scholarship. One example is the recent positioning of edtech companies as potential saviours for institutions during the COVID-19 pandemic, offering technology products that could be adopted as part of a ‘pandemic pedagogy’. This paper puts forward that such narratives should be set in the context of critical scholarship on the potentially long-term consequences of engaging in this type of ‘magic buttonism’ rather than investing in staff capacity building and institutional structures to support staff and student engagement in digital teaching and learning.

There is a wealth of research and scholarship detailing that academic staff experience significant levels of stress, burnout, and poor work-life balance, with working during evenings and weekends being commonplace, and that this has been getting worse over time. Discourse on ‘AI in higher education’ frequently constructs a need for staff to individually upskill in a set of rapidly developing innovative and disruptive technologies, divorced from any acknowledgment of existing, problematic workload practices and related academic culture issues detailed in the literature. This paper puts forward that the practice of higher education institutions devolving ultimate responsibility for complex, systemic issues such as ‘AI in higher education’ down to the level of individual academics, while simultaneously not addressing issues with academic work role definitions, workload, and underpinning academic culture, will only serve to exacerbate the psychosocial hazards of academic work, with consequent negative occupational health outcomes.

Finally, this paper will envision a more hope(punk)ful discourse on AI in higher education, based on a just digital transformation agenda, pedagogies of kindness and care, and open and inclusive educational practices.
Conference: Education after the algorithm