Conference Agenda
Overview and details of the sessions of this conference. Select a date or location to show only sessions on that day or at that location; select a single session for a detailed view (with abstracts and downloads where available).
Please note that all times are shown in the time zone of the conference (IST).
Session Overview
Date: Friday, 21/Feb/2025
8:30am - 9:10am | Registration and coffee Location: Cregan Library Foyer, DCU St. Patrick's Campus
9:10am - 9:20am | Welcome and Opening Location: Seamus Heaney Theatre G114
9:20am - 9:55am | Opening keynote address with Dr Helen Beetham Location: Seamus Heaney Theatre G114
Hacking and Rewilding as Educational Resistance
University of Manchester, United Kingdom
There can be no education 'after the algorithm' if we understand the algorithm as a cultural heuristic for individual activity. Like technology itself, algorithmic practice may be integral to being human. Education develops new cultural actors who can renew as well as reproduce the algorithms of the past. But we are living, Helen argues, through a wave of cultural dispossession. Cultural artefacts and practices have been extensively digitised, and access to them is mediated through a few powerful digital platforms. Data subjection has become the price of cultural participation. In generative AI models, we see cultural artefacts being captured wholesale in data architectures that promise powerful capabilities, but with a new price: our ability to think outside the box. In this keynote, Helen considers reasons for education to resist the power of the algorithm in this particular form. From interviews with educators on both sides of the line, she brings forward two modes of resistance - hacking and rewilding - and suggests that education needs both, in dialogue with each other, if it is to be a space for cultural renewal.
10:00am - 11:25am | Morning parallel session 1 Location: F205 Session Chair: Michał Wieczorek
10:00am - 10:15am
Reimagining AI Imagery: Creating Realistic Visuals for Better Explainability of AI
1ADAPT; 2TU Dublin; 3DCU; 4MTU
‘The Bigger Picture: Reimagining AI Imagery’ is a collaborative, interdisciplinary initiative designed to address public misconceptions and concerns about Artificial Intelligence (AI) (The Bigger Picture, 2024). The project offers a timely opportunity to reshape perceptions of AI through innovative visual storytelling, moving beyond the sci-fi-inspired tropes, limited aesthetics and inaccuracies that dominate current stock image libraries. The predominance of anthropomorphised robots, glowing brains and binary code fails to convey what AI truly is or how it functions. More critically, the current imagery reinforces misconceptions and restricts wider public understanding of the actual, realistic uses and impact of AI on society (Dihal & Duarte, 2023). ‘The Bigger Picture’ project primarily focuses on promoting Explainable and Communicable AI by fostering informed discussions about the implications of AI in the creative industries and encouraging public participation in the process of artistic creation, interpretation, and critique. Through a series of participatory workshops, artist commissions, public exhibitions, a website and an accompanying zine, the project invites communities to engage directly with concepts around how AI is represented in art and media. A central objective of the project was to promote Explainable AI—the goal of making AI technologies understandable and approachable for non-technical users. By introducing some basic concepts around how explanations for AI are depicted in visual terms, such as using decision trees, if-then binary statements or showing feature relevance (Sheridan, Murphy, & O'Sullivan, 2024), participants became familiar with the underlying concepts and inner workings of AI. Participants also explored the broader implications of AI on visual culture by creating images that referenced or were inspired by their own feelings around AI both before and after each workshop, and through an uncoding exercise designed to explain what happens during ‘prompting’ in generative AI. Lastly, participants undertook a pictogram design challenge exploring how to visually depict complex themes around how AI works, which encouraged critical thinking about the impact of AI on creative practice and on society more widely. Workshop outputs informed a call for submissions titled “AI is Everywhere” (Call for Submissions: The Bigger Picture, 2024). This call challenged artists and image-makers to depict AI’s current realities rather than relying on dystopian or futuristic clichés. Importantly, submissions were required to be non-AI-generated, emphasising human creativity and reflection. Themes explored included AI’s inherent humanity and its integration into daily life, contrasting with its frequent portrayal as non-human or purely mechanical. During Science Week 2024, ‘The Bigger Picture’ exhibitions showcased thought-provoking artwork on the theme “AI is Everywhere”. Selected images were also included in the Better Images of AI online library (Better Images of AI, n.d.), a resource promoting accurate and diverse AI representations. ‘The Bigger Picture’ demonstrates how interdisciplinary collaboration can challenge dominant narratives about AI and empower communities to engage critically with its impact on visual culture and society.
By combining XAI concepts with creative expression, ‘The Bigger Picture’ is re-imagining how AI is understood, represented and contextualised in imagery.
References
Better Images of AI. (n.d.). Betterimagesofai.org. https://betterimagesofai.org/images
Call for Submissions: The Bigger Picture. (2024, October 21). The Research Ireland ADAPT Centre for AI-Driven Digital Content Technology. https://bit.ly/The-Bigger-Picture2024
Dihal, K., & Duarte, T. (2023). Better images of AI: A guide for users and creators. Cambridge and London: The Leverhulme Centre for the Future of Intelligence and We and AI.
Sheridan, H., Murphy, E., & O'Sullivan, D. (2024). Human centered approaches and taxonomies for explainable artificial intelligence. Conference papers, (427). Retrieved from https://arrow.tudublin.ie/scschcomcon/427
The Bigger Picture. (2024). The Bigger Picture. https://thebiggerpictureai.com/

10:15am - 10:30am
Redesigning Assessments for an AI Future: Partnering with Students and Educators in Co-Design
DCU, Ireland
Partnerships between students and educators in co-designing the curriculum are becoming more prevalent. However, there is a scarcity of research in the area of assessment co-design (Deeley & Bovill, 2017) and there are few opportunities for full student participation in the assessment process (Dervan, 2018). As students are key stakeholders in their own learning, there is a need to understand effective assessment design from a student perspective, in their own words, if effective practice is to be supported. While the educator is represented as the key decision-maker in assessment of learning, Nieminen (2022) suggests that the validity of this type of assessment might be improved by involving students in assessment design. This paper explores the relationship between students as partners (SaP) and assessment co-design in an effort to re-balance power dynamics in higher education (Hassan et al., 2022) and respond to the challenges of an AI future. The role of student as co-creator seems to be more of an aspiration than a reality for many. Some studies include co-design through the creation of rubrics, choice of assessment (O’Neill, 2017) and scheduling of assessment. However, there is little evidence in relation to co-design of assessment tasks in the extant literature. Combining a partnership approach with assessment co-design could represent an innovative and authentic alternative to traditional assessment methods. In the context of the rise of generative artificial intelligence, effective assessment co-design could enable educators to reimagine their assessment strategies in a creative way. This paper presents preliminary results from an assessment co-design workshop where students chose assessment combinations from 16 eportfolio-type assessment activities as part of their assessment strategy. They chose the weightings, group or individual formats, and the scheduling of the assessment activities. The educator-student assessment (ESA) model used for the workshop was based on the work of Diane Laurillard (2012), who advocates for designing educational experiences that actively involve students in their own learning processes. The ESA model responds to the challenge of AI by emphasising diverse, active learning methods that foster critical thinking and collaboration, skills which AI cannot easily replicate. The co-design workshop allowed the students to tailor the assessment experiences to their individual and group needs. By focusing on production, practice and collaboration assessment activities, the students created, discussed and reflected on content created in balance with using AI as a support tool. Preliminary findings indicate student expectations of clear assessment briefs, assessments involving teamwork and constant feedback on assessment work in order to grow and flourish. As part of their design, they included group and individual tasks, an AI-related assessment and a video production assessment. They also highlighted supports needed to complete each task and pointed to the challenges they faced in terms of deciding the timing and mix of assessment weightings.
References
Deeley, S. J., & Bovill, C. (2017). Staff student partnership in assessment: enhancing assessment literacy through democratic practices. Assessment & Evaluation in Higher Education, 42(3), 463-477. https://doi.org/10.1080/02602938.2015.1126551
Dervan, P. (2018). Empowering students to perform an enhanced role in the assessment process: Possibilities and challenges. Transforming our World Through Design, Diversity and Education, 527-538.
Hassan, O., Foley, S., Cox, J., Young, D., McGrattan, C., & Bheoláin, R. N. (2022). Steps to Partnership: Developing, Supporting, and Embedding a New Understanding for Student Engagement in Irish Higher Education. AISHE-J: The All Ireland Journal of Teaching & Learning in Higher Education, (1), 1-19.
Laurillard, D. (2012). Teaching as a design science: Building pedagogical patterns for learning and technology. London: Routledge.
Nieminen, J. H. (2022). Assessment for Inclusion: rethinking inclusive assessment in higher education. Teaching in Higher Education, 1-19. https://doi.org/10.1080/13562517.2021.2021395
O’Neill, G. (2017). It’s not fair! Students and staff views on the equity of the procedures and outcomes of students’ choice of assessment methods. Irish Educational Studies, 36(2), 221-236. https://doi.org/10.1080/03323315.2017.1324805
Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Journal of Clinical Epidemiology, 134, 178-189. https://doi.org/10.1016/j.jclinepi.2021.03.001

10:30am - 10:45am
Public Landscaping, Counter/Cartography, and the Desire Path not Taken: Zines as Ludic Pedagogical Tools
Institute of Public Administration, Ireland
While in the classroom we insist on carefully cultivating an awareness of style, genre, and voice, LLMs (Large Language Models) generate "content", even hallucinate it. There, language is instantly levelled into a semi-corporate voice ironically termed "Delvish" by science fiction author Bruce Sterling. Despite its overuse of the verb "to delve", AI is criticised precisely for not straying far enough from its glassy surface, uncritically replicating unexamined biases (Safiya Umoja Noble). Worse still, as information is decontextualised, truncated, and remixed, students increasingly become "format agnostic", reduced to the amorphous term "content creators". For those not already trained prompt engineers with a keen critical eye, the road most travelled is that of least resistance, leading them down to the lowest common denominator where unadventurous style replaces paradigm-shifting substance. What does non-fungibility in instructional design feel like in a post-AI world? Zines constitute an essential pedagogical tool for media literacy precisely because they model critical recontextualisation. Embedded in a "subaltern counterpublics" (Nancy Fraser) permaculture with a low entry barrier thanks to its playful DIY ethos, zine-making scaffolds a dialogic (Mikhail Bakhtin), oppositional counter/cartography that glitches the deceptively atemporal re-presentation model of LLMs. Indeed, zines can help future-proof education through a multipronged strategy. Firstly, through Universal Design for Learning, they can be used to help students develop sensory literacies, creating an inclusive learning environment that values the perspectives of learners who are neurodivergent, visually impaired, or chronically ill, and/or who deal with dyspraxia, dyscalculia, dyslexia, or dysgraphia. As disability activist Imani Barbarin points out, being able-bodied comes with inbuilt planned obsolescence. In a world where haptic feedback is slowly phased out in favour of less accessible hi-tech options, even in instances where access is the chief concern (e.g. touchscreen elevator panels without buttons, Braille markings or floor announcements), we can steer students towards exploring different modalities, from screen reader versions of digital zines to incorporating tactile markings by using mixed media or engaging with traditional crafts, such as co-creating crochet zines or assembling memorial quilt panels. Secondly, given how resource-intensive AI is, zines offer a low-tech alternative prioritising experimentation over consumption, all while proposing a salvagepunk approach that rejects the inevitability of the apocalyptic climate disaster projected by the Anthropocene. Thirdly, actively engaging in a community craft while making space for false starts and repeated course correction with a collaborative ("yes, and") approach can gradually alleviate learned helplessness (Martin Seligman). This is especially important as we recognise that mental health struggles arising from the pressure of social injustice constitute a public health concern, representing an epigenetic cause of autoimmune disorders. Consequently, zines can be used to empower students to seek, build, and maintain inclusive co-creative partnerships, promoting a sustainable approach to lifelong learning through ludic pedagogy.
References
Bakhtin, M. (1981). The Dialogic Imagination: Four Essays. University of Texas Press.
Brabazon, T. (2015). Enabling University: Impairment, (Dis)ability and Social Justice in Higher Education. Springer International Publishing.
de Bruin-Molé, M. (2021). "Salvaging Utopia: Lessons for (and from) the Left in Rivers Solomon’s An Unkindness of Ghosts (2017), The Deep (2019), and Sorrowland (2021)". Humanities, 10(4).
Fraser, N. (1990). "Rethinking the Public Sphere: A Contribution to the Critique of Actually Existing Democracy". Social Text, (25/26), 56–80.
Geronimus, A. (2023). Weathering: The Extraordinary Stress of Ordinary Life on the Body in an Unjust Society. Virago Press.
Kinkaid, E. (2023). "foreword: the desire to counter". you are here: the journal of creative geography. University of Arizona.
Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.
OCLC (Online Computer Library Center). (2004). "Information Format Trends: Content, Not Containers".

10:45am - 11:00am
Student and Student Teacher Perceptions of Generative (Gen)AI in the Classroom
1Office of the Registrar, Hibernia College; 2School of Education, Maynooth University
As microcosms of society (Battalio, 2005), classrooms act as playgrounds for emerging ideas. Engagements with GenAI in the classroom take a variety of forms, from use as learning tools to assessment responses and adaptive learning routes. There is an urgent need for teachers to become acquainted with GenAI in its diverse uses by, for and with students, to determine a rationale for pedagogical and assessment decisions around its use (Chiu, 2023). It is imperative for teachers to start preparing students for a new ethical landscape (Farrelly & Baker, 2023). To do so, they must understand the attitudes of students. This paper presents the outcomes of research conducted in multiple primary schools and with student teachers, using workshops and focus groups. It explores the attitudes of both groups to engagement with GenAI and addresses three research questions.
Our findings indicate that both groups share concerns that the same negative endpoint might be reached, even if the mode of expression differed. The school context had some impact on the perceptions of GenAI use amongst pupils. This was manifested in student teacher concerns that constructive uptake and use of GenAI would rely heavily upon parental/guardian input. The presentation will share these findings and contextualise them, along with possible implications for teacher education and for the inclusion of academic integrity and GenAI in current curricula.
References
Battalio, R. (2005). Setting the stage for a diverse audience. Kappa Delta Pi Record, 42(1), 24–27. https://doi.org/10.1080/00228958.2005.10532081
Chiu, T. K. F. (2023). The impact of Generative AI (GenAI) on practices, policies and research direction in education: a case of ChatGPT and Midjourney. Interactive Learning Environments, 1–17. https://doi.org/10.1080/10494820.2023.2253861
Farrelly, T., & Baker, N. (2023). Generative Artificial Intelligence: Implications and Considerations for Higher Education Practice. Education Sciences, 13(11), 1109. https://doi.org/10.3390/educsci13111109
10:00am - 11:25am | Morning parallel session 2 Location: F215 Session Chair: Steve Welsh
10:00am - 10:20am
After AI? Critical Digital Pedagogies of Place
University of Windsor, Canada
This session will outline two approaches to contemporary digital education, explore how they are grounded in differing values and visions, and overview a research project on place-based digital pedagogies as a model for post-AI education. If the concept of a post-AI world is intended to parallel that of the post-digital world (52 group, 2009), then post-AI is not a world without Generative AI and other algorithmic tools, but one in which AI is pervasive. This is ‘post’ as omnipresence, wherein a signifier of change becomes itself ubiquitous, embedded across systems. This session unpacks and traces the post-AI educational imaginary – which is arguably upon us – and contrasts its values and trajectory with those of an alternate sociotechnical construct, that of participatory digital practice and pedagogy. Participatory digital practice has its roots in relational practices that utilize the web to engage open and networked contributions to information abundance. This interactive Web 2.0 practice dominated the first ten to fifteen years of the 21st century, and shaped critical digital pedagogy as a participatory and often democratically-informed approach to learning. But over the past decade, the platforms on which participatory digital practices depend have been enclosed by data-extractive and increasingly automated corporate entities. The participatory practices of Web 2.0 have thus been displaced by Web 3.0 and the hype surrounding Generative AI, shifting digital practice away from contribution and co-creation. Because the ‘innovation’ lens of our attention economy emphasizes the capital potential of technologies decoupled from their affordances, the Web 3.0 post-AI imaginary is also largely a ‘black box’ (Latour, 1987) whose algorithmic structures remain obscured. This trend away from participatory digital practice is amplified by cultural shifts. The promise that poverty and other social ills can be ‘solved’ with technology, framed as technosolutionism (Morozov, 2013) or the access doctrine (Greene, 2021), prioritizes a skills focus that aligns with capital interests rather than supporting social structures or criticality. Solutionist thinking underpins much of the hype about GenAI in education, and leads to decision-makers acting on behalf of capital rather than students. In the wake of the COVID-19 emergency online pivot, learners themselves often view education as a task-oriented process. These intersecting trends toward an instrumentalized and algorithmic educational imaginary reinforce AI fantasies about futures decoupled from collective human cooperation. If we abandon digital pedagogy’s participatory roots in favour of the black box of the algorithm, we risk outsourcing the entire learning process away from human cognition, creativity, and connection. As an alternative to the Web 3.0 version of a post-AI world, this session will outline place-based pedagogies as active, situated knowledges (Haraway, 1988) that can support digital participatory practices and critical pedagogical approaches. Emphasizing multiliteracies, agency, and relationship-building over solutionist skill acquisition, place-based participatory pedagogies are sociomaterial practices shaped by the specifics of built environments and digital spaces, geographies, cultures, personal attitudes, identities, and interests (Gravett & Ajjawi, 2021).
The session will overview a 2024-2025 research project with the University of the Highlands and Islands in Scotland, outlining how participation, opportunities for local and global contribution, and the enlistment of educators and learners in the firsthand experience and shaping of local life (Gruenewald, 2003) can form a basis for refusing a full transition from Web 2.0 to Web 3.0. The session will emphasize the critical importance of preserving participatory learning experiences and connection as a counterpoint to automated outputs, and underscore the role of educators in creating agential choices about which post-AI world we reinforce and validate.
References
Gravett, K., & Ajjawi, R. (2021). Belonging as situated practice. Studies in Higher Education, 47(7), 1386–1396. https://doi.org/10.1080/03075079.2021.1894118
Greene, D. (2021). The promise of access: Technology, inequality, and the political economy of hope. MIT Press.
Gruenewald, D. A. (2003). Foundations of place: A multidisciplinary framework for place-conscious education. American Educational Research Journal, 40(3), 619-654. https://doi.org/10.3102/00028312040003619
Haraway, D. (1988). Situated knowledges: The science question in feminism and the privilege of partial perspective. Feminist Studies, 14(3), 575–599. https://doi.org/10.2307/3178066
Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Cambridge: Harvard University Press.
Morozov, E. (2013). To save everything, click here: The folly of technological solutionism. PublicAffairs.

10:20am - 10:40am
Designing Equitable Assessment Futures: Lessons From Students' Use Of Generative AI
University of Cape Town, South Africa
This study explores the motivations behind university students’ engagement with generative AI (genAI) tools for assessment support, offering insights into their behaviours and decision-making processes. Conducted at the University of Cape Town, the research draws on three focus groups comprising 18 undergraduate students from diverse faculties and programmes to explore whether, how, and why students use genAI to support their assessment practices. Findings revealed a spectrum of behaviours, from reliance on genAI for translating complex disciplinary language and task analysis to summarising content, enhancing assignment quality, and improving efficiency in information retrieval. Some students described how genAI tools enabled them to take greater ownership of their academic work by guiding ideation, improving clarity, and providing a sense of control over challenging tasks. However, non-usage was also noted, influenced by concerns about plagiarism accusations, institutional guidance, and the perceived irrelevance of genAI for certain tasks. These varied behaviours point to a continuum of student agency, with some students viewing genAI as a tool to enhance learning autonomy, while others felt constrained by the risks and limitations of its use. The research has drawn on the COM-B framework (Michie et al., 2011) to better understand these behaviours, emphasising the interplay of Capability (e.g., students' AI literacy), Opportunity (e.g., accessibility of tools and societal norms), and Motivation (e.g., perceived utility and ethical considerations). The study highlights critical implications for higher education practice, particularly in reviewing and shaping assessment practices to account for genAI's evolving role. These insights are especially pertinent in the context of the extreme inequalities that characterise the South African higher education sector, where students' opportunities to engage with genAI may differ significantly. Short-term recommendations include fostering AI literacy and co-creating equitable policies for genAI usage, striving towards clarity and consistency to mitigate disparities. Medium- to long-term strategies could involve redefining academic integrity norms and standards, as well as addressing and reconceptualising blurred boundaries between human and AI contributions. By bridging behavioural insights with practical interventions, this research contributes to the discourse on the possibilities and challenges of ethical, equitable and transformative uses of generative AI in education, while highlighting the importance of supporting students’ agency in navigating these tools.
References
Michie, S., van Stralen, M. M., & West, R. (2011). The behaviour change wheel: A new method for characterising and designing behaviour change interventions. Implementation Science, 6(42). https://doi.org/10.1186/1748-5908-6-42

10:40am - 11:00am
GenAI Integration Challenges: Learner Expectations and Effects on Trust
Dublin City University, Ireland
While initial responses across academia to the seemingly sudden emergence of highly capable chat-interfaced AI in the post-GPT period have focused largely on the implications of such technology for plagiarism - and an implied lack of trust in how learners might adopt and adapt to these tools - this research presentation inverts the teacher-learner direction to investigate how emerging AI has affected student trust of academics. A research project by the author in mid-2024 (Mulligan, 2024, forthcoming), which focussed on tracking attitudes among media students in Irish universities to emerging AI technology and tools, found that regardless of actual use of AI by teachers in the three institutions studied, students suspected that the existence of the technology implied that it must already be in use by their lecturers. This finding was further confirmed in an ongoing wider study, where undergraduate student focus groups in a national set of institutions repeated suspicions that AI is being surreptitiously used by teaching staff at the same time as being banned or remaining undiscussed for student use. This submission will present the findings of this wider study, developing insights from students themselves on how their trust in academic fairness and the conduct and quality of their teaching staff is undermined by a pervasive perception that AI tools are being hypocritically used in the development of content or the assessment of work, while remaining unavailable to students. The research provides a novel and timely set of insights on learner attitudes and expectations and provides imperatives for the continuing development of Teaching & Learning practices at a time of considerable upheaval in the wake of GenAI. Alongside analysis of these negative effects on academic process reputation, the focus group findings provide insights on the state of learners' critical engagement with AI shortcomings, their perception of the relevance of AI tools to graduate skill profiles and career plans, and their sources of information on emerging GenAI. Complementing existing studies of student experience and perceptions in other geolocales (e.g. da Silva et al., 2024; Mireku, Kweku, & Abenba, 2024), the study adds Irish undergraduate student perspectives, drawn from several disciplines and regions.
References
da Silva, M., Ferro, M., Mourão, E., Seixas, E., Viterbo, J., & Salgado, L. (2024). Ethics and AI in higher education: A study on students’ perceptions. https://doi.org/10.1007/978-3-031-54235-0_14
Mireku, M., Kweku, A., & Abenba, D. (2024). Higher education students’ perception on the ethical use of generative AI: A study at the University of Cape Coast. https://doi.org/10.13140/RG.2.2.10274.64967
Mulligan, D. (2024, forthcoming). “Hypocritical much?” - Attitudes to Generative AI Tools in an Irish Media Education Context. Teaching Media Quarterly.

11:00am - 11:15am
Token Offerings: Contemplating the Cost of our Invisible Transactions within AIED Environments
Dublin City University TEU, Ireland
When educators and institutions embed AI-driven tools within our learning environments, what is the true cost of the contracts we’re signing on behalf of our learners (Saenko, 2023)? When we’ve made the complex transactions between prompt, calculation, and output invisible, what are we obscuring (Blum, 2019)? While the developers of large language models designate ‘tokens’ as the units of quantification by which characters, phonemes, and phrases are consumed and produced, this paper asks what metaphors (Weller, 2022) might be more appropriate as we hurtle towards a world made too hot by the sum of all our clicks. Perhaps the petrochemical metaphor is more apt today than when Clive Humby first declared “data is the new oil” almost two decades ago (Holmes & Tuomi, 2022). Or perhaps we can look to other symbolic taxonomies to illustrate the cost of our consumption. Consider, for instance, if during the time our search query or prompt results were being formulated, the user were to visualise the incremental melting of a glacier in Greenland, the impact of gale-force storm winds striking a family home in North Carolina, the sun striking a barren field in South Africa, a tree succumbing to wildfires in Argentina, or the gradual bleaching of a coral reef off the Australian coast. Could such in situ interventions serve to foster a greater sense of intentionality, or even serve to restrain the often arbitrary exercise of AI consumption in educational environments? This paper seeks to rematerialise the dematerialised within AIED, or at least to make it legible, as we increasingly marry our teaching and learning practices to these energy-intensive technologies. Reflecting on his own practice as a Learning Technologist supporting the adoption of AI technologies, the researcher seeks ways to embed a more tangible awareness, or visibility, of energy consumption within our digital learning environments, and to propose some methods by which we can factor energy consumption into our learning design, with the aim of adapting practices of degrowth (Selwyn, 2023).
References
Blum, A. (2019). Tubes: A journey to the center of the internet. Ecco.
Holmes, W., & Tuomi, I. (2022). State of the art and practice in AI in education. European Journal of Education, 57, 542–570.
Saenko, K. (2023). A computer scientist breaks down generative AI’s hefty carbon footprint. Scientific American.
Selwyn, N. (2023). Lessons to be learnt? Education, techno-solutionism and sustainable development. In Sætra, H. (Ed.), Techno-solutionism and sustainable development. Routledge.
Weller, M. (2022). Metaphors of Ed Tech. Athabasca University Press.
10:00am - 11:25am | Morning parallel session 3 (workshop) Location: F220
Embedding Sustainability Literacies in Postdigital Learning Design
1Atlantic Technological University, Ireland; 2Heriot-Watt University, United Kingdom
This interactive, unplugged workshop will enable participants to reflect on critical approaches to digital learning design through the exploration of sustainability literacies. Modelling the reuse of ephemera through zine making, participants will examine creative approaches to embedding sustainability practices in their learning design and curricula. Drawing on previous generative and collective critical approaches, this workshop will support participants to develop their sustainability literacies through creative expression (Thomson et al., 2023; Drumm et al., 2024; Molloy & Thomson, 2024). In a higher education landscape that is “fragmenting and fragile”, with heightened anxieties about evolving AI technologies and global polycrisis, it is worth acknowledging that our design approaches are not neutral and must respond to the current complexities (Cronin & Czerniewicz, 2023). Leveraging a postdigital approach, which acknowledges the “broader social, political, economic and environmental conditions” intrinsic to learning design (Carvalho & Yeoman, 2023), participants will explore micro, meso, and macro levels of their design practice to address broader complexities, including the paradox of climate anxiety alongside an ever-increasing demand for technological abundance and institutional and sectoral-led digital transformation. Rather than adopting a utopian/dystopian lens, participants will examine convivial approaches to embedding sustainability literacies, such as degrowth and rewilding. Through zine making, the workshop will explore “how technologies can be adopted and adapted in ways that lead to expanded freedom, creativity, autonomy and happiness” (Selwyn, 2024, p. 191). Participants will be equipped with pre-folded zine booklets and an assortment of ephemera to create zines that represent their unique reflections on embedding sustainability, degrowth, and rewilding into their ed tech usage and design practices. The facilitators propose to develop a collection of crowd-sourced educator activities based on the workshop outputs for a collaborative OER, modelling the intersections of sustainability literacies, creative futures, and learning design.
References
Carvalho, L., & Yeoman, P. (2023). Postdigital learning design. In P. Jandrić (Ed.), Encyclopedia of postdigital science and education (pp. 1–7). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-35469-4_38-1
Cronin, C., & Czerniewicz, L. (2023). Higher education for good. In Czerniewicz, L., & Cronin, C. (Eds.), Higher Education for Good (1st ed., pp. 35–52). Open Book Publishers. https://doi.org/10.11647/obp.0363.28
Drumm, L., Beetham, H., Cronin, C., & McIlwhan, R. (2024). Generating AI alternatives: Collaborating and creating intersections. Proceedings of the International Conference on Networked Learning, 14(1). https://doi.org/10.54337/nlc.v14i1.8185
Macgilchrist, F. (2021). Rewilding technology. On Education. Journal for Research and Debate, 4(12). https://doi.org/10.17899/on_ed.2021.12.2
Molloy, K., & Thomson, C. (2023). Humanising learning design with digital pragmatism. In Czerniewicz, L., & Cronin, C. (Eds.), Higher education for good (1st ed., pp. 397–420). Open Book Publishers. https://doi.org/10.11647/obp.0363.17
Molloy, K., & Thomson, C. (2024). Humanising learning design with digital pragmatism. OER24 Conference. Association for Learning Technology. https://blokks.co/schedules/oer24/#/2024-03-28/qmv8976e62ga/workshop-122-humanising-learning-design-with-digital-pragmatism
Selwyn, N. (2024). Digital degrowth: Toward radically sustainable education technology. Learning, Media and Technology, 49(2), 186–199. https://doi.org/10.1080/17439884.2022.2159978
Thomson, C., Bell, F., & Drumm, L. (2023). Guerrilla edtech responses to climate change: Reframing, rewilding, reimagining. OER23 Conference. Association for Learning Technology. https://femedtech.net/published/guerilla-edtech-responses-to-climate-change-reframing-rewilding-reimagining/
Thomson, C., & Molloy, K. (2024). Humanising learning design with digital pragmatism: OER24 workshop output (version 1). National Teaching Repository. https://doi.org/10.25416/NTR.25868365.v1

Critical Cadence: Reclaiming the Pace of Digital Productivity
Maren Deepwell Coaching and Consultancy, United Kingdom
Since 2020 we have seen a great acceleration in the adoption of blended learning worldwide. In parallel to these changes in Higher Education, employers have witnessed an evolution of flexible and hybrid working practices. This shift in both learning and working requires us to think beyond current strategies for digital transformation, especially in the context of a growing student population with changing needs in a fast-moving, competitive job market impacted by automation and AI. It's time to reclaim the pace of digital productivity. Both in education and at work, the frantic pace of productivity is determined by the speed of digital tools. Notifications, comments, emails and other content arrive ever more quickly and demand instant attention. With the advent of AI-powered learning and working, the pace of digital content creation is ever increasing. This kind of 'digital noise' can easily lead to digital overwhelm, and as a result our senses suffer. From Zoom and listener fatigue to lack of movement, our bodies, our embodied selves and our senses are feeling the impact of our increased reliance on digital productivity. This session will explore strategies for changing the cadence of our digital lives to be a little more… human. To proceed at a pace of thinking, of working and learning, that is more conducive to reflection and to criticality. In essence, to find one's own ‘Critical Cadence’. Session focus:
The overarching aim of the session is to explore how to rewild critical digital literacies alongside digital pedagogies to empower students and staff to determine the pace of working and learning.
References
Bell, F., Campbell, F., Forsythe, G., Mycroft, L., & Scott, A. M. (2023). Chapter 10 in Higher Education for Good. Open Book Publishers.
Carroll, S., Maguire, M., & Ginty, C. (2024). Blended learning in higher education: N-TUTORR overview of blended learning and its impact on the student experience. N-TUTORR.
Deepwell, M. (2024). Global trends towards flexible, hybrid working and its impact for digital leadership in higher education. N-TUTORR. (Due to be published late November 2024.)
Deepwell, M. (2021). Leading Virtual Teams: Fieldnotes from a CEO. Association for Learning Technology, UK.
Doxtdator, B. (2017). A Field Guide to ‘jobs that don’t exist yet’. Long View on Education.
10:00am - 11:25am | Morning parallel session 4 Location: Seamus Heaney Theatre G114 Session Chair: Peter Tiernan
10:00am - 10:15am
Algorithm to Empathy: Transforming Social Care Education with VR Caregivers
Atlantic Technological University, Ireland
In this presentation, we propose a new approach to social care education that can help prepare practitioners for a post-AI and post-social-robotics era. By moving beyond ‘the algorithm’, virtual reality (VR) caregivers may harness immersive interactions that circumvent many limitations of physical social robots - cost, logistics, acceptance - and can be easily updated. We highlight a four-session curriculum to show how future social workers can imagine, debate and design VR-based solutions that centre on empathy, user-friendliness and cultural sensitivity. This approach encourages social care students to develop deeper insights into the lived experiences of care recipients, such as older adults or individuals with dementia, and into the concept of ‘care’ itself. We also examine vital ethical, privacy and accessibility concerns, ensuring that VR-driven solutions remain person-centred and equitable. Ultimately, with our students, we wish to explore a potential blended future where AI complements, rather than supplants, human care. In doing so, we aim to open up a conversation on how higher education can - or even whether it should - meaningfully integrate immersive technologies for next-generation care. We welcome further discussion and collaboration.

10:15am - 10:30am
Preparing Future Teachers for the AI Era: Exploring AI Readiness, Perspectives, and Literacy in Initial Teacher Education
University College Dublin, Ireland
The integration of Artificial Intelligence (AI) into education presents both significant opportunities and challenges, particularly for Initial Teacher Education (ITE) programmes tasked with preparing future teachers for its effective and ethical use. However, varying levels of AI readiness among student teachers—encompassing their knowledge, skills, and attitudes toward AI—complicate this process. Drawing on Schepman and Rodway’s (2020) work on AI readiness, this conceptual paper introduces the ‘kaleidoscope of AI perspectives’, a reflective framework designed to deepen awareness of the varied dispositions that influence AI adoption and use in educational contexts. The paper explores the intersection of AI readiness with the UNESCO AI Competency Framework, offering a structured, dynamic approach to developing AI literacy within ITE. Central to this discussion is the debate over whether AI literacy should be treated as a distinct area or integrated into broader digital literacy frameworks (Holmes et al., 2022). Additionally, the paper examines where and how AI literacy could be incorporated into ITE programmes, providing actionable recommendations for its inclusion. The authors argue that embedding AI literacy into ITE is critical for equipping future teachers to navigate and employ AI responsibly, ethically, and effectively in educational contexts. This proactive measure is positioned as essential, given AI’s growing influence in education (EC, 2022). By fostering an informed and critical mindset, the proposed framework aims to prepare teachers not only to use AI technologies but also to understand and question their implications for teaching, learning, and equity.
References
European Commission. (2022). Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators. Publications Office of the European Union. https://data.europa.eu/doi/10.2766/153756
Holmes, W., Persson, J., Chounta, I.-A., Wasson, B., & Dimitrova, V. (2022). Artificial intelligence and education: A critical view through the lens of human rights, democracy, and the rule of law. The Council of Europe.
Schepman, A., & Rodway, P. (2020). Initial validation of the general attitudes towards artificial intelligence scale. Computers in Human Behavior Reports, 1, 100014. https://doi.org/10.1016/j.chbr.2020.100014
UNESCO. (2024). AI competency framework for students. Paris: United Nations Educational, Scientific and Cultural Organization. https://doi.org/10.54675/ZJTE2084

10:30am - 10:50am
Pre-Service Teachers’ Experiences and Perceptions of Generative Artificial Intelligence: An International Comparative Study
1Dublin City University, Ireland; 2Arizona State University, USA
Generative Artificial Intelligence (GenAI) is reshaping education, creating both opportunities and challenges for teacher education programs (Mishra et al., 2024). As pre-service teachers increasingly engage with GenAI tools like ChatGPT, understanding how institutional and regional contexts shape their experiences and perceptions of GenAI is critical (Celik et al., 2022; Moorhouse & Kohnke, 2024). As part of an international collaborative design-based research project (Hsu et al., 2024), this study compares the experiences and perceptions of pre-service teachers at Dublin City University (DCU) with those of students majoring in education or enrolled in postgraduate education programs at Arizona State University (ASU) regarding the use of GenAI for personal, academic, and professional purposes. This research aims to inform the development of targeted training programs tailored to the specific needs of each institution. Data were collected from 204 DCU participants and 127 ASU participants using a questionnaire with items on a 5-point Likert scale. The survey examined the application of GenAI for personal, academic, and professional purposes, as well as participants’ perceptions of its opportunities, challenges, ethical concerns, and professional development needs. DCU participants reported slightly higher experience levels with GenAI (M = 2.94, SD = 1.46) than ASU participants (M = 2.80, SD = 1.32), though the difference was not statistically significant. Moreover, DCU participants reported significantly more frequent use of GenAI tools (M = 2.43, SD = 1.38) than ASU participants (M = 2.17, SD = 1.14; p < .05, Cohen’s d = 0.21), reflecting a small-to-medium effect size. Both groups recognised GenAI’s opportunities for enhancing teaching and learning, with DCU participants scoring slightly higher (M = 3.47, SD = 0.79) than ASU participants (M = 3.41, SD = 0.90), although this difference was not significant. Furthermore, ASU participants perceived slightly more challenges (M = 3.31, SD = 0.85) than DCU participants (M = 3.18, SD = 0.95), but this difference was also non-significant. Significant differences were observed in ethical considerations, with ASU participants expressing significantly greater concerns (M = 3.38, SD = 0.69) compared to DCU participants (M = 3.07, SD = 0.91; p < .001, Cohen’s d = 0.36), suggesting a medium effect size. Regarding professional development, DCU participants reported a significantly greater need for training on effective use (M = 3.60, SD = 1.10) than ASU participants (M = 3.24, SD = 1.02; p < .005, Cohen’s d = 0.34), also indicating a medium effect size. They further expressed a significantly higher need for training on ethical use (M = 4.24, SD = 0.97) compared to ASU participants (M = 4.00, SD = 0.87; p < .05, Cohen’s d = 0.25), reflecting a small-to-medium effect size. This study underscores the importance of tailoring GenAI-focused professional development to specific institutional contexts, addressing distinct strengths and challenges. The findings highlight the practical significance of these differences for equipping future educators with the skills to leverage AI responsibly in diverse educational institutions.
References
Celik, I., Dindar, M., Muukkonen, H., & Järvelä, S. (2022). The Promises and Challenges of Artificial Intelligence for Teachers: a Systematic Review of Research. TechTrends, 66(4), 616-630. https://doi.org/10.1007/s11528-022-00715-y
Hsu, H.-P., Mak, J., Werner, J., White-Taylor, J., Geiselhofer, M., Gorman, A., & Torrejon Capurro, C. (2024). Preliminary Study on Pre-Service Teachers’ Applications and Perceptions of Generative Artificial Intelligence for Lesson Planning. Journal of Technology and Teacher Education, 32(3), 409-437.
Mishra, P., Oster, N., & Henriksen, D. (2024). Generative AI, Teacher Knowledge and Educational Research: Bridging Short- and Long-Term Perspectives. TechTrends, 68(2), 205-210. https://doi.org/10.1007/s11528-024-00938-1
Moorhouse, B. L., & Kohnke, L. (2024). The effects of generative AI on initial language teacher education: The perceptions of teacher educators. System, 122, 103290. https://doi.org/10.1016/j.system.2024.103290
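The group contrasts reported in this abstract can be sanity-checked from the published summary statistics alone. As a minimal illustrative sketch (not the authors' analysis code; the function name and the use of the standard pooled-standard-deviation formula are assumptions), Cohen's d computed from the reported means, standard deviations, and sample sizes reproduces the frequency-of-use effect size up to rounding:

import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    # Cohen's d for two independent groups, using the pooled standard deviation
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled_var)

# Frequency-of-use comparison reported above: DCU (n = 204) vs ASU (n = 127)
d = cohens_d(2.43, 1.38, 204, 2.17, 1.14, 127)
print(f"d = {d:.2f}")  # prints d = 0.20, matching the reported 0.21 up to rounding

10:50am - 11:05am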
Integrating Generative AI into WebQuest Methodology to Enhance Digital and Information Literacy in Pre-Service Teacher Education
Dublin City University, Ireland
As artificial intelligence (AI) technologies, particularly generative AI (GenAI), become increasingly prevalent, their implications for education grow more profound. Tools such as ChatGPT offer immediate access to an array of synthesised information, potentially reshaping how students interact with knowledge. However, this accessibility also presents challenges for educators, especially concerning the authenticity, reliability, and educational value of AI-generated content. This paper explores a novel approach to developing digital and information literacy skills in pre-service post-primary teachers through a WebQuest methodology enhanced with GenAI tools. Originally designed to help students engage critically with web-based resources through structured, inquiry-based learning (Dodge, 1997), the WebQuest methodology provides a scaffolded framework that can be adapted to include GenAI, enabling students to build skills in both traditional and AI-mediated research. In this study, we introduce a modified WebQuest designed specifically to engage pre-service secondary teachers with digital literacy in the age of AI. This offers a critical opportunity for students to analyse, question, and contrast information from multiple sources. The modified WebQuest structure begins with an introduction to the topic. Through a selection of curated, reliable resources, including journal articles, vetted websites, and other digital resources, students initially conduct traditional research on the topic. Following this, they engage with GenAI tools by posing questions to explore AI’s capacity to generate information, summarise topics, and provide answers. By comparing AI-generated responses with traditional resources, students gain a deeper understanding of the accuracy, reliability, and potential biases inherent in AI systems. To support this comparative approach, we developed two evaluation rubrics to encourage both self-reflection and structured assessment. The student self-evaluation rubric emphasises self-awareness in evaluating one’s accuracy in understanding content, depth of analysis, and ability to critically reflect on GenAI-generated responses versus traditional sources. For instance, students assess how AI responses align with or diverge from journal articles and other verified sources, examining discrepancies or biases they uncover. This process of reflection helps students understand the affordances and limitations of using AI in an educational context, fostering reflection on their digital and information literacy skills. The instructor evaluation rubric complements the student-focused assessment by emphasising pedagogical and analytical competencies. This rubric evaluates students on their understanding of the WebQuest topic, their effectiveness in comparing sources, and the depth of insight in their final analyses. Additionally, it assesses how well students articulate their reflections on the role of GenAI, as well as the clarity and coherence of their final presentation or report. By incorporating both self-assessment and instructor-led assessment, this approach fosters a holistic development of digital and information literacy skills, equipping future educators with a critical toolkit for navigating AI in the classroom (Holmes, Bialik, & Fadel, 2019; Webber, 2018).
This integration of GenAI into WebQuest methodology represents a significant pedagogical development, as it enables pre-service teachers to engage with AI while honing essential skills in evaluating information. Given the rapid pace at which AI is reshaping information access, understanding the affordances and limitations of AI becomes essential. Through the proposed methodology, pre-service teachers are guided in developing a critical approach to AI-mediated information. This paper contributes to the conversation on AI and education by offering a framework for using AI tools within an established educational methodology, fostering a digitally literate and discerning generation capable of navigating an AI-driven world.
References
Dodge, B. (1997). Some thoughts about WebQuests. San Diego State University. Retrieved from http://webquest.sdsu.edu/about_webquests.html
Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning. Boston, MA: Center for Curriculum Redesign.
Webber, S. (2018). The impact of artificial intelligence on information literacy. Journal of Information Literacy, 12(2), 1-15.

11:05am - 11:20am
Teachers' Perceptions on the Impact of AI - a Report from the PAIDEIA Erasmus+ Project
Dublin City University, Ireland
Introduction
As AI technologies continue to advance, they open up new avenues for educators — from content creation and automation of administrative tasks to data-driven insights. This raises questions regarding the role of AI in education, and the ethical implications of its integration. This report, part of the PAIDEIA project funded by the Erasmus+ Programme, delves into the perspectives of educators on the impact of AI in education, now and in the future. It provides an analysis of both the opportunities and challenges presented by AI, offering a range of perspectives from teachers across seven European countries.
Methodology
The research employs a mixed-methods approach, encompassing surveys and focus groups to gather comprehensive insights. Over 700 teachers from Belgium, Bulgaria, Ireland, Italy, Malta, Spain, and Türkiye participated, providing a diverse view of their current use of AI and their perceptions of AI in educational settings. Surveys were conducted first to establish baseline data, followed by focus groups that allowed for deeper exploration of themes.
Findings
The findings indicate that AI usage in education among PAIDEIA partner countries is generally low to moderate, with significant variation in how and where AI is applied. AI is sporadically used for tasks like lesson planning, personalising learning, and content creation, while areas such as assessment, feedback, and administrative tasks see even less support through AI tools. Countries like Bulgaria and Ireland show higher adoption of AI to enhance learning experiences, whereas usage in Belgium, Spain, and Türkiye remains minimal. Understanding of AI among teachers also varies widely; while most teachers grasp basic AI principles and ethical considerations, many lack confidence in explaining AI processes, staying current with advancements, and applying AI effectively in educational settings. Teachers across PAIDEIA countries identify challenges such as the reliability of AI-generated information and data privacy issues, with mixed views on whether AI might undermine educational equity, diminish the teacher’s role, or impact teacher-student dynamics. Despite these concerns, educators are generally optimistic about AI’s potential to personalise learning, innovate teaching methods, and engage students, though Italian teachers expressed some hesitancy around these benefits. Teachers’ perceptions of students’ views on AI reveal mixed enthusiasm and awareness, with students generally seen as curious but unclear about AI’s benefits and potential ethical issues. There is broad agreement on the need for mandatory AI training for teachers, with insufficient training provisions noted across countries. Opinions are mixed regarding the adequacy of CPD opportunities, confidence in pursuing further training, and access to online AI resources. Overall, the findings highlight a need for structured, accessible training on AI in education, with a strong emphasis on practical applications, ethical considerations, and tailored CPD resources to build teacher confidence and capacity for AI integration.
Conclusion
This research provides important insights into teachers’ perceptions of AI in education, revealing that while usage is quite low, teachers recognise the opportunities AI may bring in the future.
However, it also highlights the need to address ethical concerns associated with AI, alongside the potential negative effect it may have on student creativity and critical thinking. The study strongly emphasises the need for comprehensive training programmes for educators and clear guidelines on the use of AI in educational settings.
References
Abimbola, C., Eden, C. A., Chisom, O. N., & Adeniyi, I. S. (2024). Integrating AI in education: Opportunities, challenges, and ethical considerations. Magna Scientia Advanced Research and Reviews.
Harry, A. (2023). Role of AI in Education. Interdisciplinary Journal and Humanity (INJURITY).
Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial Intelligence in Education: Promises and Implications for Teaching and Learning. Boston, MA: Center for Curriculum Redesign.
Porayska-Pomsta, K., Holmes, W., & Nemorin, S. (2022). The Ethics of AI in Education.
11:30am - 12:00pm | Coffee Break Location: Cregan Library Foyer
11:30am - 1:00pm | Zine Workshop with Bryan Mathers Location: F220
11:30am - 12:30pm
Heroes & Villains Zine Workshop
Visual Thinkery, United Kingdom
Everyone has a (visual) story to tell. Participants in this workshop will go on a hands-on journey of discovery - into thinking more visually in the world they already inhabit and the themes of this conference. We'll get wondering about the universal language of cartoons, visual storytelling and a human-centred creative process. This workshop is about communicating ideas - not creating art! We will learn how cartooning and creative listening can be used in day-to-day life. We'll learn how to grow ideas and how to get them to hang around. And together we will explore a more visual approach to communicating anything. We'll stroll through a bluffer's guide to cartooning, explore some creative techniques, and make a lo-fi story zine out of a single piece of paper. Please note the time of this workshop in the timetable, and please register in advance to ensure a spot as places are limited.
12:00pm - 1:00pm | Invited speakers Location: Seamus Heaney Theatre G114. Session Chair: Kate Molloy |
|
12:00pm - 12:15pm
Building Trust with AI: Practical Approaches for Higher Education University of Queensland, Australia As Generative AI becomes increasingly embedded in higher education, establishing trust among educators, students, and institutions is essential. This session explores practical strategies for integrating AI in ways that foster confidence, transparency, and ethical practice. Drawing on insights from developing policy and practice and from leading teaching innovation, I examine how thoughtful policy development, collaborative approaches between students and staff, and fostering autonomy and peer support can promote responsible AI use while enhancing learning for all. Real-world examples will highlight successes and challenges in embedding AI into teaching practices, incorporating reflections from both students and educators. The presentation will also underscore the importance of collaborative initiatives, such as Communities of Practice (CoPs) and resource-sharing platforms, in building and sustaining trust across academic communities. By focusing on actionable approaches, I share thoughts on opportunities to navigate the complexities of Generative AI in education and inspire trust-driven innovation. 12:15pm - 12:30pm
Embracing Uncertainty, Community and Care in the Ethics of AI and Data in Education: Steps to a Contro-pedagogy of Cruelty Università degli Studi di Padova, Italy The prolific discussion around the ethics of technology has clearly reached the field of education. In this regard, transnational bodies such as the EU, UNESCO, and the OECD have published recommendations and guidelines to promote ethical AI and data use in education (Bosen et al., 2023; Directorate-General for Education, 2022; Molina et al., 2024; OECD & Education International, 2023). However, applied research in various social domains has revealed that the challenge of adopting an ethical approach to AI and data lies not in developing ethical norms but in implementing them. Thinking ethically is distinct from acting ethically (Morley et al., 2023). Moreover, ethical guidelines may even conflict with one another (Tamburrini, 2020, p. 68). Professional and prospective educators may encounter significant challenges when attempting to adopt an ethical approach to technology, often influenced by techno-enthusiastic discourses (Nemorin et al., 2023). The platformization and datafication of education have directed attention toward user experience, productivity, and performance, under narratives that promote personalization and normalize access to technology as a marker of quality and inclusion (Williamson, 2023). In Rita Segato’s words (2018), these are the values of a pedagogy of cruelty. Educators and learners frequently perceive ethical frameworks as mere "compliance checklists" (Stewart, 2023), demonstrating limited engagement with, or understanding of, the underlying technological infrastructures and vested interests (Hartong & Förschler, 2019), or even full adherence to the pedagogies of cruelty in order to survive the system. Broad critical rules often fail to include explicit activism or actionable strategies (Rose, 2003). For Segato, a contro-pedagogy of cruelty implies embracing human uncertainty. Contrary to the ideals of efficiency and productivity, ethics is a never-ending, imperfect work based on relationships and care. I draw on Costello’s work (2023), considering that the ethics of care applied to the pedagogical relationship is a first and foundational choice in engaging with the ethical debate about technologies (not only which technologies, but whether we want them in an educational space at all). If humanity's intricate quest for moral ideals through tangible actions cannot be fully encapsulated by normative prescriptions, is the ethics of AI and data “teachable”? I argue here that overly rigid adherence to checklists, especially when ethics is merely "transmitted" or "taught" in a hierarchical dynamic, merely keeps the ball rolling for a pedagogy of cruelty. I contend that a contro-pedagogy of cruelty must support actors in identifying ethical dilemmas from their own perspectives, reflecting on them, and engaging in community efforts and values to make moral decisions. Though this is my personal perspective, I will illustrate the concept above by introducing some of the activities envisaged within the project ETH-TECH “Anchoring Ethical Technology (AI and Data) Usage in the Education Practice”. References Bosen, L.-L., Morales, D., Roser-Chinchilla, J. F., Sabzalieva, E., Valentini, A., Vieira do Nascimento, D., & Yerovi, C. (2023). Harnessing the era of artificial intelligence in higher education: A primer for higher education stakeholders. UNESCO-IESALC. 
https://unesdoc.unesco.org/ark:/48223/pf0000386670?locale=en Costello, E. (2023). Postdigital Ethics of Care. In P. Jandrić (Ed.), Encyclopedia of Postdigital Science and Education (pp. 1–6). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-35469-4_68-1 Directorate-General for Education, Youth, Sport and Culture. (2022). Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators. Publications Office of the European Union. https://data.europa.eu/doi/10.2766/153756 Hartong, S., & Förschler, A. (2019). Opening the black box of data-based school monitoring: Data infrastructures, flows and practices in state education agencies. Big Data & Society, 6(1), 2053951719853311. https://doi.org/10.1177/2053951719853311 Molina, E., Cobo-Romaní, C., Pineda, J., & Rovner. (2024). Revolución de la IA en la Educación: Lo Que Hay Que Saber. World Bank. https://documents.worldbank.org/en/publication/documents-reports/documentdetail/099355206192434920/IDU18a4e03161fc3d14a691a4dc13642bc9e086a Morley, J., Kinsey, L., Elhalal, A., Garcia, F., Ziosi, M., & Floridi, L. (2023). Operationalising AI ethics: Barriers, enablers and next steps. AI & SOCIETY, 38(1), 411–423. https://doi.org/10.1007/s00146-021-01308-8 Nemorin, S., Vlachidis, A., Ayerakwa, H. M., & Andriotis, P. (2023). AI hyped? A horizon scan of discourse on artificial intelligence in education (AIED) and development. Learning, Media and Technology, 48(1), 38–51. https://doi.org/10.1080/17439884.2022.2095568 OECD, & Education International. (2023). Opportunities, guidelines and guardrails for effective and equitable use of AI in education. OECD Publishing. https://www.oecd.org/education/ceri/Opportunities,%20guidelines%20and%20guardrails%20for%20effective%20and%20equitable%20use%20of%20AI%20in%20education.pdf Rose, E. (2003). The Errors of Thamus: An Analysis of Technology Critique. Bulletin of Science, Technology & Society, 23(3), 147–156. https://doi.org/10.1177/0270467603023003001 Segato, R. L. (2018). Contra-pedagogías de la crueldad. Prometeo Libros. Stewart, B. (2023). Toward an Ethics of Classroom Tools: Educating Educators for Data Literacy. In J. E. Raffaghelli & A. Sangrà (Eds.), Data Cultures in Higher Education: Emergent Practices and the Challenge Ahead (pp. 229–244). Springer International Publishing. https://doi.org/10.1007/978-3-031-24193-2_9 Tamburrini, G. (2020). Etica delle macchine: Dilemmi morali per robotica e intelligenza artificiale. Carocci editore. Williamson, B. (2023). The Social Life of AI in Education. International Journal of Artificial Intelligence in Education. https://doi.org/10.1007/s40593-023-00342-5 12:30pm - 12:45pm
There is No Such Thing as an Ethical Black Box Higher Education Authority, Ireland The ‘black box’ signifies systems whose decision-making processes remain opaque, even to those who use and advocate for them most frequently. Higher education should be predicated on openness: the free exchange of knowledge, the cultivation of critical inquiry, and the fostering of transparency in the pursuit of understanding. Any system that operates as a black box, obscuring its processes from scrutiny, stands in fundamental opposition to these tenets. This presentation will argue that ethical implementations of AI and ‘black box’ systems like ChatGPT and Copilot are irreconcilable within the context of higher education. Generative AI systems deployed in educational settings must not only be technically effective but also embody the values of openness and accountability that underpin teaching, learning, and research. When the mechanisms of AI systems are hidden from view, they obstruct the ability of educators and students to critically engage with the technologies shaping their educational experiences. Regardless of any policy or practice, such opacity risks undermining trust, impeding the development of digital literacy, and reinforcing inequities by privileging those with access to proprietary knowledge. Education seeks to empower learners to question, challenge, and contribute to knowledge creation, but this empowerment is impossible when systems operate in secrecy. In higher education, AI must not only be transparent but also participatory, enabling staff and students to understand, critique, and influence its use. Ethical AI in higher education requires a commitment to openness that extends beyond technical explainability to include collaborative development, accessible design, and the rejection of proprietary opacity. In rejecting the notion of an ethical black box, this presentation calls for a paradigm in which transparency, engagement, and equity are central to AI’s role in the academy. |
1:00pm - 2:00pm | Lunch Location: Main Restaurant |
2:00pm - 3:10pm | Afternoon parallel session 1 Location: F205 Session Chair: Michał Wieczorek |
|
Pick and Mix: The Sweet Mix Of AI And AT To Help Students With Academic Challenges. DCU, Ireland AI and Assistive Technology (AT) are evolving, sometimes overlapping, fields that complement Universal Design for Learning, an inclusive teaching and learning framework. This melting pot of AI and AT can help not only students with disabilities, who may struggle with time management, motivation, procrastination, focus and notetaking, but also the wider student cohort, who face many of the same challenges. These tools can support students with these challenges, but curating these technologies is necessary so that students can build them into their learning routines. Harnessing student experience and voice can help compile an authentic list of these AI and AT options and enhance the learning journey. The presentation will outline some of these tools, how they support students, and how they can be built into daily practice. Demonstrations of free tools like Goblin.tools will show how academic tasks can be broken down, while paid AT tools like Glean for notetaking will demonstrate how quizzes can be generated from lecture content to test students' memory and comprehension. The overall aim of the session is to impart positive awareness of AI in an academic context and to show how this merging of AI and AT is creating more options for inclusion that support the widening student body in Higher Education. Critical AI Literacy Through Critical Virtual Exchange The Open University, United Kingdom Virtual exchange (VE) refers to online collaborative learning between groups of students in different cultural contexts and geographical locations, combining the deep impact of intercultural dialogue with the broad reach of digital technologies (EVOLVE, 2020). It offers learning benefits - intercultural communicative competence and digital literacy skills development - across the curriculum. In fact, it is an established ‘internationalisation of the curriculum’/’internationalisation at home’ (IaH) strategy in higher education worldwide (O’Dowd & Beelen, 2021). However, VE and VE-based IaH are not inherently equitable and inclusive. Like other forms of online or blended education, they are prone to Western hegemonies and influenced by inequalities in access to and experience with technology, institutional constraints (e.g., lack of support and incentives for educators), gender, race, age, English language dominance, and socio-political and geopolitical challenges (Helm, 2020). Critical VE (CVE) (Hauck, 2023) is VE viewed through a social justice and inclusion lens and is informed by critical digital literacy (CDL), which “examines how the operation of power within digital contexts shapes knowledge, identities, social relations, and formations in ways that privilege some and marginalize others” (Darvin, 2017, p. 2). We frame critical AI literacy as a subset of CDL and illustrate how CVE provides the ideal educational setting for critical AI literacy skills development for both educators and students, allowing them to “gesture towards” (Kerr & Andreotti, 2018; Stein et al., 2020) decolonial VE where participants can engage in thinking “otherwise” (Reljanovic Glimäng, 2022). The approach will be illustrated through several CVE examples where students carried out online collaborative project work that included the critical use and evaluation of GenAI tools and their output. References: Darvin, R. (2017). 
Language, Ideology, and Critical Digital Literacy. In S. Thorne & S. May (Eds.), Language, Education and Technology. Encyclopaedia of Language and Education (3rd ed.). Springer, Cham. EVOLVE Project Team (2020). The Impact of Virtual Exchange on Student Learning in Higher Education: EVOLVE Project Report. http://hdl.handle.net/11370/d69d9923-8a9c-4b37-91c6-326ebbd14f17 Hauck, M. (2023). From Virtual Exchange to Critical Virtual Exchange and Critical Internationalization at Home. In Diversity Abroad, The Global Impact Exchange. https://www.diversitynetwork.org/GlobalImpactExchange Helm, F. (2020). EMI, internationalisation, and the digital. International Journal of Bilingual Education and Bilingualism, 23(3), 314-325. https://doi.org/10.1080/13670050.2019.1643823 O’Dowd, R., & Beelen, J. (2021). Virtual exchange and Internationalisation at Home: navigating the terminology. EAIE Blog & podcast. https://www.eaie.org/blog/virtual-exchange-iah-terminology.html Reljanovic Glimäng, M. (2022). Safe/brave spaces in virtual exchange on sustainability. Journal of Virtual Exchange, 5, 61-81. AI-Based Research Mentors: Plausible Scenarios & Ethical Issues 1Dublin City University, Ireland; 2University College Dublin, Ireland Mentorship is considered an important approach in Research Integrity (RI) teaching, e.g. encouraging researchers – the mentees – to act with the highest levels of integrity. However, mentorship is complex, with several known limitations, e.g. a lack of standardisation in mentor training and practice. Recently, a discourse has begun on the benefits of Artificial Intelligence (AI)-based mentors (AIMs), often with authors citing how AIMs may alleviate some of the limitations in the current mentorship model. Here, we have focused on the research environment, and on how AI-based research mentors (AIRMs) might be used in, and impact on, the area of RI. While the examination of ethical issues with the use of AI across an array of areas is underway, e.g. autonomous vehicles, the identification of the ethical issues with the use of AIRMs is nearly absent from the literature. Guided by the Anticipatory Technology Ethics (ATE) approach, we have addressed this absence by 1) outlining four plausible future scenarios concerning AIRMs, with a focus on their use and impact in the area of RI, and 2) identifying the ethical issues with such use. Within this talk, we will present the findings from our work to date. References Anderson, M. S., Horn, A. S., Risbey, K. R., Ronning, E. A., De Vries, R., & Martinson, B. C. (2007). What Do Mentoring and Training in the Responsible Conduct of Research Have To Do with Scientists’ Misbehavior? Findings from a National Survey of NIH-Funded Scientists. Academic Medicine, 82(9), 853-860. https://doi.org/10.1097/ACM.0b013e31812f764c Brey, P. A. E. (2012). Anticipating ethical issues in emerging IT. Ethics and Information Technology, 14, 305–317. https://doi.org/10.1007/s10676-012-9293-y Crean, D., Gordijn, B., & Kearns, A. J. (2023). Teaching research integrity as discussed in research integrity codes: A systematic literature review. Accountability in Research, 1-24. https://doi.org/10.1080/08989621.2023.2282153 Crean, D., Gordijn, B., & Kearns, A. J. (2024). Impact and Assessment of Research Integrity Teaching: A Systematic Literature Review. Science and Engineering Ethics, 30(4), 30. https://doi.org/10.1007/s11948-024-00493-1 Hill, S. E. M., Ward, W. L., Seay, A., & Buzenski, J. (2022). The Nature and Evolution of the Mentoring Relationship in Academic Health Centers. 
J Clin Psychol Med Settings, 29(3), 557-569. https://doi.org/10.1007/s10880-022-09893-6 Labib, K., Evans, N., Roje, R., Kavouras, P., Reyes Elizondo, A., Kaltenbrunner, W., Buljan, I., Ravn, T., Widdershoven, G., Bouter, L., Charitidis, C., Sørensen, M. P., & Tijdink, J. (2021). Education and training policies for research integrity: Insights from a focus group study. Science and Public Policy, 49(2), 246-266. https://doi.org/10.1093/scipol/scab077 Pizzolato, D., & Dierickx, K. (2023). The Mentor’s Role in Fostering Research Integrity Standards Among New Generations of Researchers: A Review of Empirical Studies. Science and Engineering Ethics, 29(3), 19. https://doi.org/10.1007/s11948-023-00439-z AI and Democratic Education: A Critical Pragmatist Assessment Dublin City University, Ireland Abstract This paper examines the relationship between artificial intelligence and democratic education. AI and other digital technologies are currently being touted for their potential to “democratise” education, even if it is not clear what this would entail (see, e.g., Adel et al., 2024; Kamalov et al., 2023; Kucirkova & Leaton Gray, 2023). By analysing the discourse surrounding educational AI, I distinguish four distinct but interrelated meanings of democratic education: equal access to quality learning, education for living in a democracy, education through democratic practice, and democratic governance of education. I argue that none of these four meanings can render education democratic on its own, and present Dewey’s (1956; 2018) notion of democratic education as integrating these distinct conceptualisations. Dewey emphasises that education needs to provide children with skills and dispositions necessary for democratic living, experience in communication and cooperation, opportunities to codetermine the shape of democratic institutions and education itself, and equal opportunities to participate in learning. By examining today’s commercial AI tools (Holmes & Tuomi, 2022; Khan, 2024), I argue that their emphasis on individualisation of learning, their narrow focus on the mastery of the curriculum, and the drive to automate teachers’ tasks are obstacles to democratic education. I demonstrate that AI deprives children of opportunities to gain experience in democratic living and acquire communicative and collaborative skills and dispositions, while also habituating them to an environment over which they have little or no control, potentially impacting how they will approach shared problems as democratic citizens. I conclude by outlining some suggestions for making educational AI more in line with a pragmatist notion of democracy and democratic education. References Adel, Amr, Ali Ahsan, and Claire Davison. ‘ChatGPT Promises and Challenges in Education: Computational and Ethical Perspectives’. Education Sciences 14, no. 8 (August 2024): 814. https://doi.org/10.3390/educsci14080814. Dewey, John. The Child and the Curriculum: And The School and Society. University of Chicago Press, 1956. Dewey, John. Democracy and Education. Gorham, ME: Myers Education Press, 2018. Holmes, Wayne, and Ilkka Tuomi. ‘State of the Art and Practice in AI in Education’. European Journal of Education 57, no. 4 (2022): 542–70. https://doi.org/10.1111/ejed.12533. Kamalov, Firuz, David Santandreu Calonge, and Ikhlaas Gurrib. ‘New Era of Artificial Intelligence in Education: Towards a Sustainable Multifaceted Revolution’. Sustainability 15, no. 16 (January 2023): 12451. https://doi.org/10.3390/su151612451. Khan, Salman. 
Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing). New York: Viking, 2024. Kucirkova, Natalia, and Sandra Leaton Gray. ‘Beyond Personalization: Embracing Democratic Learning Within Artificially Intelligent Systems’. Educational Theory 73, no. 4 (2023): 469–89. https://doi.org/10.1111/edth.12590. |
2:00pm - 3:10pm | Afternoon parallel session 2 Location: F215 Session Chair: James Brunton |
|
Meeting the Opportunities and Challenges of Generative AI through Student Partnership: Co-Creation and the Development of Student GenAI guidelines at Maynooth University. Maynooth University, Ireland With the release of ChatGPT in 2022, many educators and commentators realised the potential of Large Language Models (LLMs) to disrupt education. Much ink has been spilled discussing the challenges of these technologies (particularly to academic integrity), while also acknowledging their potential affordances (Cotton, Cotton, and Reuben Shipway, 2023; Mollick and Mollick, 2023; Mollick and Mollick, 2023b). However, the boosterism that has surrounded their release and promotion has skewed discussion concerning the future of GenAI in higher education: speculations about personalized learning, the ‘potential’ affordances of GenAI and pronouncements of rapid uptake in the current and future workplace appeal to the skills agenda which has come to dominate neo-liberal institutions of higher education (for claims of productivity increases see Eloundou, Manning, Mishkin & Rock, 2023; see also Microsoft, 2024). Given the confusion about the impact of these technologies, it is no surprise that students are concerned, ill-prepared and poorly informed about their use. This paper focuses on Maynooth University’s response to this challenge: the use of student/staff partnership in the co-creation of its student-facing GenAI guidelines. It explores the purpose of the project, its format, reflections on the creation process and finally the guidelines that it produced. In doing so, the paper serves two central purposes: firstly, it adds to existing discussions about the use and misuse of GenAI in higher education; secondly, it argues for the essential role of staff/student co-creation in responding to this. Reference List Cotton, D. R. E., Cotton, P. A., & Reuben Shipway, J. (2023). Chatting and Cheating. Ensuring Academic Integrity in the Era of ChatGPT. [Electronic Version] Innovations in Education and Teaching International, 61(2), 228–239. https://doi.org/10.1080/14703297.2023.2190148 Accessed 29 May 2024. Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models. [Electronic Version] Preprint, arXiv. https://doi.org/10.48550/arXiv.2303.10130. Accessed 4 May 2023. Microsoft (2024). Generative AI in Ireland 2024 – Adoption Rates and Trends. https://pulse.microsoft.com/en-ie/work-productivity-en-ie/na/fa1-generative-ai-adoption-rates-are-on-the-rise-in-workplaces-according-to-our-latest-report-supported-by-trinity-college-dublin/ Accessed 23 May 2024. Mollick, E. R., & Mollick, L. (2023). New Modes of Learning Enabled by AI Chatbots: Three Methods and Assignments. [Electronic Version] Preprint, SSRN. https://doi.org/10.2139/ssrn.4300783. Mollick, E. R., & Mollick, L. (2023b). Using AI to Implement Effective Teaching Strategies in Classrooms: Five Strategies, Including Prompts (March 17, 2023). 
[Electronic Version] The Wharton School Research Paper. Available at SSRN: https://ssrn.com/abstract=4391243 or http://dx.doi.org/10.2139/ssrn.4391243 Integrating Automated Writing Evaluation with Teacher Feedback: Enhancing Writing Accuracy and Autonomy in Turkish EFL Classrooms Mary Immaculate College, University of Limerick, Ireland This study explores the impact of integrating Automated Writing Evaluation (AWE) with traditional teacher feedback on the writing performance of Turkish EFL students. Using a quasi-experimental design, the research aims to determine whether the combined use of automated and human feedback can enhance students' writing scores and accuracy more effectively than teacher feedback alone. The study was conducted with 120 undergraduate EFL students, who were divided into an experimental group receiving combined feedback and a control group receiving only teacher feedback. Data were gathered through pre-test and post-test writing tasks, error analysis reports generated by the Criterion AWE tool, and student reflections on their feedback experience. The results indicate that both feedback approaches led to improvements in students' overall writing scores, with no statistically significant difference between the groups in terms of overall performance (a minimal sketch of this kind of between-groups comparison follows this session block). However, the experimental group showed a more pronounced reduction in grammatical and mechanical errors, suggesting that the integration of AWE with teacher feedback may be more effective in addressing these specific aspects of writing. Students in the experimental group benefited from the immediate, detailed, and accessible nature of automated feedback, which allowed them to correct errors while their ideas were still fresh. Additionally, participants reported that the promptness and specificity of AWE feedback motivated them to improve their writing. Despite these benefits, some students expressed concerns over the limited focus of automated feedback on content and over occasional vague or incorrect advisory messages. The findings underscore the potential of combining AWE systems with traditional feedback to alleviate the burden on teachers by allowing them to focus more on content and higher-order writing concerns. Moreover, the use of automated feedback fosters a more autonomous, learner-centered environment, encouraging students to self-regulate and engage actively in the writing process. The study's results align with existing literature on the advantages of technology-enhanced language learning, highlighting the importance of immediate feedback in facilitating learning and reducing recurrent errors. However, the study also acknowledges limitations, such as the non-randomized sampling, the specific context of English majors, and the absence of a delayed post-test to assess long-term effects. Future research should investigate the integration of AWE in diverse learner groups and instructional contexts, as well as explore its sustained impacts on writing proficiency. Turning Off The Gaslight: The Best Of Scholarly Critical Thinking As A Response To GenAInt's Dystopias various open education NGOs, including Open Washington, Open Oregon, Creative Commons, etc., Italy It is said that under Mussolini, at least the trains in Italy ran on time. 
Similarly, there are probably some use cases of generative AI [genAInt] which make the world a better place -- some tools using large language models [LLMs] to support learners with disabilities come to mind -- but nearly all proposed uses are actually quite dystopian if one stops to look with a calm but critical eye. Moreover, turning down the gaslighting from big tech companies hoping to justify the hundreds of billions of dollars of investment they expect to continue receiving, it should be clear that the fundamentals of genAInt are horrible for the global climate, for the lives of creators, for the fundamental ethics of the academy, and for the gap between rich and poor, powerful and powerless. In fact, I would argue that in the context of education, one of the deepest wounds the current genAInt hype cycle is inflicting is to fundamentally devalue human knowledge, experience, and expertise: if an LLM can spit out a brand new calculus or art history textbook in an instant, what use are disciplinary experts ... and why would students waste their time building that expertise -- getting an education! -- if they can get the same outputs by typing prompts into LLMs? This is no time to abandon the very idea of education and expertise, when major global empires have democratically elected leaders who lied (and continue to lie) to the public about basic history, science, and economics. The hucksters of genAInt solemnly evoke the existential danger of runaway artificial general intelligence -- which honest computer scientists know is as distant a dream today as it was when Alan Turing founded their subject three quarters of a century ago -- while in fact it is the concentration of wealth, the destruction of our climate, and the attempted destruction of the idea of expertise which truly threaten our world. Instead, it is a perfect time to think critically about what the science of LLMs tells us they really are and can ever really do; to love and use technology where it empowers humans and otherwise makes the world a better place, as even the original Luddites did (contrary to the usual connotations of that word, as described in a recent book [Merchant: Blood in the Machine, 2023]); but to hate and fight against technology which steals, surveils, empowers the powerful and disenfranchises the powerless. Schools and universities can protect their communities with strong policies, and nations or transnational associations like the European Union can use strong laws to protect their information ecosystems from the enshittification which genAInt is rolling over the internet like a climate change-energized hurricane. Dueling Discourses on AI in Higher Education: Critically Surfacing Tensions and Grounding Narratives in Context Dublin City University, Ireland Two things can be true at the same time: 1) AI tools represent a rapidly developing area of innovative and disruptive technological advancement that impacts on the operation of higher education programmes and institutions, which demands attention at every level of higher education institutions; and 2) an unsettling amount of the discourse in and around the use of AI tools in higher education is confused, contradictory, untethered from relevant context, and does more harm than good. This paper will chart tensions between the different ‘AI in higher education’ discourses and will ground them in different, relevant contexts, incorporating reflections on the author’s academic experiences and practices. 
Discourses on ‘AI in higher education’ frequently treat AI tools as something new that students are using and that staff need to learn about so they can adapt their teaching and assessment practices to this one technology/set of technologies. These discussions are frequently divorced from any discussion of existing institutional digital competency frameworks and related, strategic, resourced capacity building for staff and students. Discourse also frequently acknowledges the myriad ethical issues with staff and student use of different AI tools, while in the same breath saying that use of AI tools is both unavoidable and desirable. This paper puts forward that any attempt to bring a technology/set of technologies into educational practices should be grounded in an ecosystem of evidence-based capacity building in digital competencies, and a framework for ethical use of technology in teaching and learning. The ‘students are using it, so staff have to embrace it’ discourse is also highly aligned with the techno-deterministic and techno-optimistic narratives utilised by technology/edtech companies in the past, as detailed in critical digital pedagogy scholarship. One example is the recent positioning of edtech companies as potential saviours for institutions during the COVID-19 pandemic, offering technology products that could be adopted as part of a ‘pandemic pedagogy’. This paper puts forward that such narratives should be set in the context of critical scholarship on the potentially long-term consequences of engaging in this type of ‘magic buttonism’ rather than investing in staff capacity building and institutional structures to support staff and student engagement in digital teaching and learning. There is a wealth of research and scholarship detailing that academic staff experience significant levels of stress, burnout, and poor work-life balance, with working during evenings and weekends commonplace, and that this has been getting worse over time. Discourse on ‘AI in higher education’ frequently constructs a need for staff to individually upskill in a set of rapidly developing innovative and disruptive technologies, divorced from any acknowledgment of existing, problematic workload practices and related academic culture issues detailed in the literature. This paper puts forward that the practice of higher education institutions devolving ultimate responsibility for complex, systemic issues, such as ‘AI in higher education’, down to the level of individual academics, while simultaneously not addressing issues with academic work role definitions, workload, and underpinning academic culture, will only serve to exacerbate the psychosocial hazards of academic work, with consequent negative occupational health outcomes. Finally, this paper will envision a more hope(punk)ful discourse on AI in higher education, based on a just digital transformation agenda, pedagogies of kindness and care, and open and inclusive educational practices. |
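The AWE study in the session above reports that both feedback groups improved, with no statistically significant difference in overall scores between groups. The abstract does not name the statistical test used; as a minimal, hypothetical Python sketch of how such a between-groups comparison of pre/post gain scores is commonly run (all group labels and numbers below are invented for illustration, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical gain scores (post-test minus pre-test), one per student,
# mirroring a 120-student, two-group quasi-experimental design.
awe_plus_teacher = rng.normal(loc=1.2, scale=1.0, size=60)  # combined feedback
teacher_only = rng.normal(loc=1.0, scale=1.0, size=60)      # teacher feedback only

# Welch's t-test compares the two group means without assuming equal variances.
t_stat, p_value = stats.ttest_ind(awe_plus_teacher, teacher_only, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # interpret p against a 0.05 threshold
```

A p-value above the chosen threshold would be read, as in the abstract, as no detectable difference in overall gains between the two feedback conditions.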
2:00pm - 3:10pm | Afternoon parallel session 3 (workshop) Location: F220 |
|
Writing a GenAI Statement for Programmes, Modules, or Assessments UCC, Ireland As expectations around the use of Generative Artificial Intelligence (GenAI) can vary from discipline to discipline, or even module to module or assessment to assessment, it is important to set clear expectations for students regarding what is considered authorised or unauthorised use of GenAI at programme, module, or assessment level. Taking this a step further, explaining the reasoning behind the chosen approach and relating it to learning outcomes provides useful clarity and transparency for students. Additionally, clarifying the expectations of an assessment can reinforce learning outcomes and skill development. Beyond providing much-needed clarity and transparency for students, the process of drafting a GenAI statement encourages us to think through our goals and intentions and, specifically, whether or not the use of GenAI is appropriate or relevant in our programmes, modules, and/or assessments. This workshop will introduce participants to the key considerations and steps in drafting comprehensive GenAI statements. These steps include:
Workshop participants will be guided through the above steps and will leave the session with a draft of a robust and comprehensive GenAI statement they can use in their own teaching practice. Participants should bring assessment briefs, module learning outcomes, rubrics, and any other relevant materials to the workshop for use in drafting their own GenAI statements. References Perkins, M., Furze, L., Roe, J., & MacVaugh, J. (2024). The Artificial Intelligence Assessment Scale (AIAS): A Framework for Ethical Integration of Generative AI in Educational Assessment. Journal of University Teaching and Learning Practice, 21(06), Article 06. https://doi.org/10.53761/q3azde36 Goff, L., & Thelen, S. (2024). Writing a GenAI Statement. University College Cork. Retrieved December 4, 2024, from https://www.ucc.ie/en/ethical-use-of-generative-ai-toolkit/academic-integrity/writing-a-genai-statement/ |
2:00pm - 3:10pm | Afternoon parallel session 4 Location: Seamus Heaney Theatre G114. Session Chair: R Lowney |
|
Exploring Information Literacy Competencies of Engineering Students in Their Use of ChatGPT ATU, Ireland Abstract This research investigates the information literacy proficiency levels of engineering students in the context of their use of ChatGPT. With the increasing integration of AI tools like ChatGPT into educational settings, understanding how students engage with and evaluate information is critical. The study employs the DigComp 2.2 framework as a benchmark for measuring information literacy competency, providing a structured approach to assess skills such as information evaluation and information creation. To contextualise competency levels, the study examines how students interact with ChatGPT in practical scenarios. A mixed-methods approach is adopted to achieve this: a survey collects data on the frequency, purposes, and types of ChatGPT use among students, while semi-structured interviews provide a deeper exploration of their proficiency levels based on specific tasks and decision-making processes. This combination allows for the triangulation of data, ensuring a comprehensive understanding of information literacy within AI use. The findings of this study will offer insights into how engineering students navigate the challenges of information literacy in the digital age, particularly in relation to emerging AI technologies. By identifying competency levels and patterns of use, the research aims to inform educational strategies for enhancing information literacy, ultimately contributing to better preparation of students for the demands of the modern engineering workplace. Students as Co-Designers of Ethical AI Integration in a Post-Primary Computer Science Classroom Dublin City University This presentation examines how students can take an active role in shaping post-AI educational landscapes, emphasising their role as co-designers in defining how generative Artificial Intelligence (genAI) is used to support their learning of programming. Situated in the researcher’s own classroom, this in-depth study takes an ethical approach to exploring the role of genAI in supporting programming education at the post-primary level. Aligned with UNESCO’s call for human-centered research that is “co-designed by teachers, learners, and researchers” (Miao & Holmes, 2023), this study addresses gaps in the literature regarding student-centered approaches in the area of generative AI and novice programming (Stone, 2024). A design-based research (DBR) methodology is employed, contributing theoretically and practically through exploring this novel space (McKenney & Reeves, 2019). Its focus on co-creation is a key factor in choosing this methodology (Anderson & Shattuck, 2012; Barab & Squire, 2004). The research progresses through iterative phases of exploration, construction, and reflection (McKenney & Reeves, 2019). The first phase aims to understand the needs and context of the students and explore how they learn about AI before using it. In the second phase, students act as co-creators to develop pedagogical guidelines to support their use of prompts while learning programming. The third phase involves evaluating the pedagogical guidelines. This ethically grounded approach reflects DBR’s focus on “understanding the messiness of real-world practice” (Barab & Squire, 2004, p. 3). Its iterative nature ensures student participation remains at the core, with refinements made to the pedagogical framework throughout the process. 
Creativity and design remain central to the DBR approach (Hall, 2020), aligning with the aims and objectives of Leaving Certificate Computer Science (Department of Education, 2023). An overview of the research design will be presented, before sharing preliminary findings from the study’s first phase, offering insights into student attitudes and understandings of generative AI, as well as their use of ChatGPT prompts to support their learning of programming. Paradoxes and dilemmas that surface through the research process will be presented as the researcher engages in a reflexive process (Braun & Clarke, 2013), particularly in balancing ethical considerations with the practical implementation of generative Artificial Intelligence in education. Feedback will be welcomed to inform and refine the next phases of this iterative EdD research. References Anderson, T., & Shattuck, J. (2012). Design-Based Research: A Decade of Progress in Education Research? Educational Researcher, 41(1), 16–25. https://doi.org/10.3102/0013189X11428813 Barab, S., & Squire, K. (2004). Design-Based Research: Putting a Stake in the Ground. Journal of the Learning Sciences, 13(1), 1–14. https://doi.org/10.1207/s15327809jls1301_1 Braun, V., & Clarke, V. (2013). Successful qualitative research: A practical guide for beginners. SAGE. Department of Education. (2023). Leaving Certificate Computer Science Curriculum Specification. https://www.curriculumonline.ie/getmedia/6eaaa05e-a10b-4bae-bd85-99a1ede0cd67/LC-Computer-Science-specification-updated.pdf Hall, T. (2020). Bridging Practice and Theory: The Emerging Potential of Design-based Research (DBR) for Digital Innovation in Education. Education Research and Perspectives: An International Journal, 47, 157–173. McKenney, S. E., & Reeves, T. C. (2019). Conducting educational design research (Second edition). Routledge. Miao, F., & Holmes, W. (2023). Guidance for generative AI in education and research | UNESCO. UNESCO. https://www.unesco.org/en/articles/guidance-generative-ai-education-and-research Stone, I. (2024). Exploring Human-Centered Approaches in Generative AI and Introductory Programming Research: A Scoping Review. Proceedings of the 2024 Conference on United Kingdom & Ireland Computing Education Research, 1–7. https://doi.org/10.1145/3689535.3689553 Rewilding AI Pedagogies with Educational Values Dun Laoghaire Institute of Art, Design and Technology (IADT), Ireland Artificial intelligence (AI) in education is impacting our pedagogical practices (Holmes, 2024; McNamara, 2024). For example, our educational values are being increasingly usurped by the notion that computational processing can be conceptualised as thinking and intelligence. Moreover, these ‘intelligent’ technologies are often presented as superior to human intelligence in terms of speed, efficiency and precision (Selwyn, 2017). Furthermore, these AI algorithms are powerful arbiters of knowledge creation and pedagogical practices through the various ways that they process large streams of online data by way of the classification, creation and dissemination of information and people (Edwards, 2015). This ‘datafication of education’ is situated within an algorithmic culture where everything can be measured and verified against process-driven, goal-oriented pedagogies (Biesta, 2009). However, as Fawns (2018) argues, not everything important is quantifiable. 
Indeed, this over-reliance on factual data does not adequately consider human ‘value-judgements’ around what is educationally desirable (Biesta, 2009, p.35; O’Leary and Cui, 2020). For example, what cannot be measured is not valued. Thus, the need to rewild our AI pedagogies with more educational values becomes imperative. In response, I propose critical posthuman theory (Braidotti, 2019) to help us think about knowledge and its creation in alternative ways. The posthuman convergence does not position man as its central subject but rather imagines a new collective subject where humans, technology and material matter are inextricably interconnected in and of the world. The posthuman subject is embodied, embedded, relational and differentiated with the capacity to affect and be affected (Braidotti, 2019). The metaphorical figurations of the posthuman subject do not separate the mind from the body, thus thinking capacity cannot be replaced with computational capacity and intelligence is not a fully autonomous force but rather a relational activity. Thus, the embodied and embedded nature of the posthuman subject rejects the instrumental notion of technology. Here, posthuman knowledge cannot be reduced to computational models that adopt an instrumental approach to teaching and learning where human experience is categorised as variables to be counted and processed. This paper is significant in its contribution to how we might collectively rewild AI pedagogies with posthuman values that are more educationally desirable. References Biesta, G. (2009) Good education in an age of measurement: on the need to reconnect with the question of purpose in education. Educational Assessment, Evaluation and Accountability, 21 (1), 33–46. doi.org/10.1007/s11092-008-9064-9 Braidotti, R. (2019) Posthuman knowledge. Cambridge: Polity Press. Edwards, R. (2015) Software and the hidden curriculum in digital education. Pedagogy, Culture & Society, 23 (2), 265–279. doi.org/10.1080/14681366.2014.977809 Fawns, T. (2018) Postdigital education in design and practice. Postdigital Science and Education, 1 (1), 132–145. doi.org/10.1007/s42438-018-0021-8 Holmes, W. (2024). AIED—Coming of Age? International Journal of Artificial Intelligence in Education. 34, 1–11. https://doi.org/10.1007/s40593-023-00352-3 McNamara, D.S. (2024). From Cognitive Simulations to Learning Engineering, with Humans in the Middle. International Journal of Artificial Intelligence in Education. 34, 42–54. https://doi.org/10.1007/s40593-023-00349-y O’Leary, M. and Cui, V. (2020) Reconceptualising teaching and learning in higher education: challenging neoliberal narratives of teaching excellence through collaborative observation. Teaching in Higher Education, 25 (2), 141–156. doi.org/10.1080/13562517.2018.1543262 Selwyn, N. (2017) Education and technology: key issues and debates. London: Bloomsbury. Developing Critical Data Literacy with Undergraduate Students to Counter Datafication DCU, Ireland The areas of learning analytics and critical data literacy are growing in focus in higher education, because both society and higher education are becoming increasingly ‘datafied’ (Atenas, Havemann and Timmermann, 2020; Verständig, 2021), particularly through collection of learner data to inform learning analytics. Critical data literacy for individuals has emerged as a way to counter datafication’s effects (Sander, 2020). It is an important part of a person’s wider digital literacies. 
In their role as virtual learning environment (VLE) administrator in an Irish university, the author holds a unique perspective on how this particular technology datafies its users. Recognising this, and wider processes of datafication in society, the author sought to respond to calls in the literature for greater critical data literacy education opportunities for students. An educational intervention for undergraduate students in the Education discipline was developed, drawing upon Pangrazio and Selwyn’s (2019) domains of personal data literacies. It provided a space for students to come together and reflect on their technology use and data practices, through facilitated discussion. Students also explored a personal dashboard of their VLE data, developed by the author as ‘an object to think with’ (Papert, 1980) to prompt further reflection (an illustrative sketch of this kind of aggregation follows this session block). Post-intervention interviews were held to analyse the students’ experience and whether their critical data literacy had been fostered. Themes of agency, fairness and critical data literacy emerged. Participants had a positive experience of the intervention and have changed their practice around technology and data as a result. They would welcome further educational opportunities to develop their critical data literacy, including within their undergraduate studies. This study offers an example of one particular approach to critical data literacy education which shares students’ own data with them. This act of ‘data transparency’ (Prinsloo and Slade, 2015) with students can encourage the university to practise it more widely. References Atenas, J., Havemann, L. and Timmermann, C. (2020) ‘Critical literacies for a datafied society: academic development and curriculum design in higher education’, Research in Learning Technology, 28(0). Available at: https://doi.org/10.25304/rlt.v28.2468. Pangrazio, L. and Selwyn, N. (2019) ‘“Personal data literacies”: A critical literacies approach to enhancing understandings of personal digital data’, New Media & Society, 21(2), pp. 419–437. Papert, S. (1980) Mindstorms: Children, Computers, and Powerful Ideas. New York: Basic Books. Prinsloo, P. and Slade, S. (2015) ‘Student privacy self-management: implications for learning analytics’, in Proceedings of the Fifth International Conference on Learning Analytics And Knowledge. LAK ’15: the 5th International Learning Analytics and Knowledge Conference, Poughkeepsie New York: ACM, pp. 83–92. Sander, I. (2020) ‘Critical big data literacy tools—Engaging citizens and promoting empowered internet usage’, Data & Policy, 2. Available at: https://doi.org/10.1017/dap.2020.5. Verständig, D. (2021) ‘Critical Data Studies and Data Science in Higher Education: An interdisciplinary and explorative approach towards a critical data literacy’, Seminar.net, 17(2). Available at: https://doi.org/10.7577/seminar.4397. |
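The critical data literacy study above centres on a personal dashboard of students' own VLE data. The abstract does not describe its implementation; as a rough sketch of the kind of per-student aggregation such a dashboard might surface, assuming a simple event-log structure (all column names, actions and records below are invented for illustration):

```python
import pandas as pd

# Hypothetical VLE event log: one row per logged student action.
events = pd.DataFrame({
    "student_id": ["s01", "s01", "s02", "s01", "s02"],
    "action": ["view_resource", "post_forum", "view_resource",
               "submit_quiz", "view_resource"],
    "timestamp": pd.to_datetime([
        "2024-10-01 09:00", "2024-10-01 09:20", "2024-10-01 10:05",
        "2024-10-02 14:30", "2024-10-03 08:45",
    ]),
})

# Per-student summary a personal dashboard might show back to each student,
# making visible what the VLE records about them.
summary = events.groupby("student_id").agg(
    total_events=("action", "size"),
    distinct_actions=("action", "nunique"),
    first_activity=("timestamp", "min"),
    last_activity=("timestamp", "max"),
)
print(summary)
```

Showing students such a summary of their own records is one concrete form the 'data transparency' described above could take.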
3:10pm - 4:00pm | Gasta session Location: Seamus Heaney Theatre G114. Session Chair: Tom Farrelly |
|
In These Golden Years? Education Under Pressure University of Windsor, Canada Gasta! Hype vs Reality For Tech's Sake, Ireland Gasta! Bias and Misinformation in the AI Age: Why Critical Thinking Matters in the Classroom Dublin City University, Ireland The rise of AI is transforming the way we access, process, and engage with information. While these innovations offer significant potential, they also come with the risks of misinformation and bias. As AI becomes more integrated into education, this talk highlights the need to equip students with the skills to critically evaluate and analyse the information they receive, beyond simply using AI tools. It calls for a shift in teaching practices that can develop both their critical thinking and digital literacy. By rethinking assignments and redesigning evaluation methods, educators can develop a generation of learners who are not only technologically proficient but also reflective, insightful, and adaptable, ensuring that students are prepared to succeed in an era of rapid technological change and information overload. Pay Attention Dublin City University, Ireland How technology and media hijack our attention and why we need to stop it. Antinomic Thinking, Generative AI and Online Quizzes South East Technological University, Ireland Antinomy is a situation in which two statements or beliefs that are both reasonable seem to contradict each other. In five minutes, we’ll try to explore how generative AI is both an aid and a threat to student learning with online quizzes. GASTA (Great Authentic Strong Transversal Assessment) - The Case For Interactive Oral Assessments Dublin City University, Ireland Close your eyes and imagine the world in which your students will work. They might be nurses, engineers, teachers, business people or some other profession. Imagine them going about their daily tasks and their line manager asking them to complete a task, on their own, in a room with only a pen and paper and no access to any external resources. Hard to imagine, right? So why do some people think this is a good way of assessing students? While there may be a veneer of protecting academic integrity, is this a sufficient reason for assessing students in an invigilated, time-limited, closed-book exam? While not the answer to all your assessment problems, Interactive Oral (IO) assessments might be a suitable alternative approach for you. IO assessments are genuine, free-flowing and unscripted interactions between a student and a marker based on a real-life scenario. They give students an opportunity to showcase their knowledge and engage them in an authentic way that prepares them for professional life. IOs are academically robust, strong in terms of academic integrity, good at assessing students' transversal skills and good for student engagement. They can be used across a range of disciplines at all stages of the learning journey. There is an upfront load involved in designing and developing resources for IO assessments, but the benefits to academics and students make them well worth the effort. Could they claim the accolade of being GASTA - a Great Authentic Strong Transversal Assessment? One Wild And Precious Life Computers in Education Society of Ireland, Ireland This Gasta presentation will be 4m 59s long, beginning and ending by shining the light of poet Mary Oliver's most famous question on each of our intentions for our 'wild and precious life' - online. Contents in between will depend on what happens between now and the upload deadline. |
4:05pm - 4:55pm | Closing keynote address with Professor Martin Weller Location: Seamus Heaney Theatre G114. |
|
AI, metaphors and ecosystems The Open University, United Kingdom Artificial Intelligence creates an unpredictable future for many in higher education. When faced with uncertainty, metaphors provide a useful method to consider possibilities, solutions and impacts by transferring understanding from a known domain to the new one. This talk will consider the role of metaphors in understanding the impact of AI, particularly focusing on the concept of the information ecosystem. Metaphors of invasive species and control of ecosystems will be explored to examine possible responses to the advent of AI in the higher education information ecosystem. |