Conference Agenda

Overview and details of the sessions of this conference, with abstracts and downloads where available.

Please note that all times are shown in the time zone of the conference (IST).

 
Session Overview
Location: Seamus Heaney Theatre G114, Cregan Library Building
Date: Friday, 21/Feb/2025
9:10am - 9:20am: Welcome and Opening
Location: Seamus Heaney Theatre G114.
9:20am - 9:55am: Opening keynote address with Dr Helen Beetham
Location: Seamus Heaney Theatre G114.
 

Hacking And Rewilding As Educational Resistance

Helen Beetham

University of Manchester, United Kingdom

There can be no education 'after the algorithm' if we understand the algorithm as a cultural heuristic for individual activity. Like technology itself, algorithmic practice may be integral to being human. Education develops new cultural actors who can renew as well as reproduce the algorithms of the past.

But we are living, Helen argues, through a wave of cultural dispossession. Cultural artefacts and practices have been extensively digitised, and access to them is mediated through a few powerful digital platforms. Data subjection has become the price of cultural participation. In generative AI models, we see cultural artefacts being captured wholesale in data architectures that promise powerful capabilities, but with a new price: our ability to think outside the box. In this keynote, Helen considers reasons for education to resist the power of the algorithm in this particular form. From interviews with educators on both sides of the line, she brings forward two modes of resistance - hacking and rewilding - and suggests that education needs both, in dialogue with each other, if it is to be a space for cultural renewal.

 
10:00am - 11:25am: Morning parallel session 4
Location: Seamus Heaney Theatre G114.
Session Chair: Peter Tiernan
 
10:00am - 10:15am

Algorithm to Empathy: Transforming Social Care Education with VR Caregivers

Perry Share, John Pender

Atlantic Technological University, Ireland

In this presentation, we propose a new approach to social care education that can help prepare practitioners for a post-AI and post-social robotics era. By moving beyond ‘the algorithm’, virtual reality (VR) caregivers may harness immersive interactions that circumvent many limitations of physical social robots - cost, logistics, acceptance - and can be easily updated. We highlight a four-session curriculum to show how future social workers can imagine, debate and design VR-based solutions that centre on empathy, user-friendliness and cultural sensitivity. This approach encourages social care students to develop deeper insights into the lived experiences of care recipients, such as older adults or individuals with dementia, and into the concept of ‘care’ itself. We also examine vital ethical, privacy and accessibility concerns, ensuring that VR-driven solutions remain person-centred and equitable. Ultimately, with our students, we wish to explore a potential blended future where AI complements, rather than supplants, human care. In doing so, we aim to open up a conversation on how - or even whether - higher education can meaningfully integrate immersive technologies for next-generation care. We welcome further discussion and collaboration.



10:15am - 10:30am

Preparing Future Teachers for the AI Era: Exploring AI Readiness, Perspectives, and Literacy in Initial Teacher Education

Declan Qualter, Eileen Bowman, Rachel Farrell

University College Dublin, Ireland

The integration of Artificial Intelligence (AI) into education presents both significant opportunities and challenges, particularly for Initial Teacher Education (ITE) programmes tasked with preparing future teachers for its effective and ethical use. However, varying levels of AI readiness among student teachers—encompassing their knowledge, skills, and attitudes toward AI—complicate this process. Drawing on Schepman and Rodway’s (2020) work on AI readiness, this conceptual paper introduces the ‘kaleidoscope of AI perspectives,’ a reflective framework designed to deepen awareness of the varied dispositions that influence AI adoption and use in educational contexts.

The paper explores the intersection of AI readiness with the UNESCO AI Competency Framework, offering a structured, dynamic approach to developing AI literacy within ITE. Central to this discussion is the debate over whether AI literacy should be treated as a distinct area or integrated into broader digital literacy frameworks (Holmes et al., 2022). Additionally, the paper examines where and how AI literacy could be incorporated into ITE programmes, providing actionable recommendations for its inclusion.

The authors argue that embedding AI literacy into ITE is critical for equipping future teachers to navigate and employ AI responsibly, ethically, and effectively in educational contexts. This proactive measure is positioned as essential, given AI’s growing influence in education (EC, 2022). By fostering an informed and critical mindset, the proposed framework aims to prepare teachers not only to use AI technologies but also to understand and question their implications for teaching, learning, and equity.

European Commission. (2022). Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators. Publications Office of the European Union. https://data.europa.eu/doi/10.2766/153756

Holmes, W., Persson, J., Chounta, I.-A., Wasson, B., & Dimitrova, V. (2022). Artificial intelligence and education: A critical view through the lens of human rights, democracy, and the rule of law. Council of Europe.

Schepman, A., & Rodway, P. (2020). Initial validation of the general attitudes towards artificial intelligence scale. Computers in Human Behavior Reports, 1, 100014. https://doi.org/10.1016/j.chbr.2020.100014

UNESCO. (2024). AI competency framework for students. Paris: United Nations Educational, Scientific and Cultural Organisation. https://doi.org/10.54675/ZJTE2084



10:30am - 10:50am

Pre-Service Teachers’ Experiences and Perceptions of Generative Artificial Intelligence: An International Comparative Study

Hsiaoping Hsu1, Carolina Torrejon Capurro2, Janice Mak2, Jennifer Werner2, Janel White-Taylor2, Melissa Geiselhofer2

1Dublin City University, Ireland; 2Arizona State University, USA

Generative Artificial Intelligence (GenAI) is reshaping education, creating both opportunities and challenges for teacher education programs (Mishra et al., 2024). As pre-service teachers increasingly engage with GenAI tools like ChatGPT, understanding how institutional and regional contexts shape their experiences and perceptions of GenAI is critical (Celik et al., 2022; Moorhouse & Kohnke, 2024). As part of an international collaborative design-based research project (Hsu et al., 2024), this study compares the experiences and perceptions of pre-service teachers at Dublin City University (DCU) with those of students majoring in education or enrolled in postgraduate education programs at Arizona State University (ASU) regarding the use of GenAI for personal, academic, and professional purposes. This research aims to inform the development of targeted training programs tailored to the specific needs of each institution.

Data were collected from 204 DCU participants and 127 ASU participants using a questionnaire with items on a 5-point Likert scale. The survey examined the application of GenAI for personal, academic, and professional purposes, as well as participants’ perceptions of its opportunities, challenges, ethical concerns, and professional development needs.

DCU participants reported slightly higher experience levels with GenAI (M = 2.94, SD = 1.46) than ASU participants (M = 2.80, SD = 1.32), though the difference was not statistically significant. Moreover, DCU participants reported significantly more frequent use of GenAI tools (M = 2.43, SD = 1.38) than ASU participants (M = 2.17, SD = 1.14; p < .05, Cohen’s d = 0.21), reflecting a small-to-medium effect size. Both groups recognised GenAI’s opportunities for enhancing teaching and learning, with DCU participants scoring slightly higher (M = 3.47, SD = 0.79) than ASU participants (M = 3.41, SD = 0.90), although this difference was not significant. Furthermore, ASU participants perceived slightly more challenges (M = 3.31, SD = 0.85) than DCU participants (M = 3.18, SD = 0.95), but this difference was also non-significant. Significant differences were observed in ethical considerations, with ASU participants expressing stronger concerns (M = 3.38, SD = 0.69) compared to DCU participants (M = 3.07, SD = 0.91; p < .001, Cohen’s d = 0.36), suggesting a medium effect size. Regarding professional development, DCU participants reported a significantly greater need for training on effective use (M = 3.60, SD = 1.10) than ASU participants (M = 3.24, SD = 1.02; p < .005, Cohen’s d = 0.34), also indicating a medium effect size. They further expressed a significantly higher need for training on ethical use (M = 4.24, SD = 0.97) compared to ASU participants (M = 4.00, SD = 0.87; p < .05, Cohen’s d = 0.25), reflecting a small-to-medium effect size.
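As a rough arithmetic check on the reported effect sizes (a sketch assuming the standard pooled-standard-deviation form of Cohen's d and the stated group sizes of 204 and 127), the frequency-of-use result can be reproduced from the means and standard deviations above:

\[ s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}} = \sqrt{\frac{203(1.38)^2 + 126(1.14)^2}{329}} \approx 1.29, \qquad d = \frac{2.43 - 2.17}{1.29} \approx 0.20 \]

which agrees with the reported d = 0.21 up to rounding of the group means.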

This study underscores the importance of tailoring GenAI-focused professional development to specific institutional contexts, addressing distinct strengths and challenges. The findings highlight the practical significance of these differences for equipping future educators with the skills to leverage AI responsibly in diverse educational institutions.

References

Celik, I., Dindar, M., Muukkonen, H., & Järvelä, S. (2022). The Promises and Challenges of Artificial Intelligence for Teachers: a Systematic Review of Research. TechTrends, 66(4), 616-630. https://doi.org/10.1007/s11528-022-00715-y

Hsu, H.-P., Mak, J., Werner, J., White-Taylor, J., Geiselhofer, M., Gorman, A., & Torrejon Capurro, C. (2024). Preliminary Study on Pre-Service Teachers’ Applications and Perceptions of Generative Artificial Intelligence for Lesson Planning. Journal of Technology and Teacher Education, 32(3), 409-437.

Mishra, P., Oster, N., & Henriksen, D. (2024). Generative AI, Teacher Knowledge and Educational Research: Bridging Short- and Long-Term Perspectives. TechTrends, 68(2), 205-210. https://doi.org/10.1007/s11528-024-00938-1

Moorhouse, B. L., & Kohnke, L. (2024). The effects of generative AI on initial language teacher education: The perceptions of teacher educators. System, 122, 103290. https://doi.org/10.1016/j.system.2024.103290



10:50am - 11:05am

Integrating Generative AI into WebQuest Methodology to Enhance Digital and Information Literacy in Pre-Service Teacher Education

Peter Tiernan, Enda Donlon

Dublin City University, Ireland

As artificial intelligence (AI) technologies, particularly generative AI (GenAI), become increasingly prevalent, their implications for education grow more profound. Tools such as ChatGPT offer immediate access to an array of synthesised information, potentially reshaping how students interact with knowledge. However, this accessibility also presents challenges for educators, especially concerning the authenticity, reliability, and educational value of AI-generated content. This paper explores a novel approach to developing digital and information literacy skills in pre-service post-primary teachers through a WebQuest methodology enhanced with GenAI tools. Originally designed to help students engage critically with web-based resources through structured, inquiry-based learning (Dodge, 1997), the WebQuest methodology provides a scaffolded framework that can be adapted to include GenAI, enabling students to build skills in both traditional and AI-mediated research.

In this study, we introduce a modified WebQuest designed specifically to engage pre-service secondary teachers with digital literacy in the age of AI. This offers a critical opportunity for students to analyse, question, and contrast information from multiple sources. The modified WebQuest structure begins with an introduction to the topic. Through a selection of curated, reliable resources, including journal articles, vetted websites, and other digital resources, students initially conduct traditional research on the topic. Following this, they engage with GenAI tools by posing questions to explore AI’s capacity to generate information, summarise topics, and provide answers. By comparing AI-generated responses with traditional resources, students gain a deeper understanding of the accuracy, reliability, and potential biases inherent in AI systems.

To support this comparative approach, we developed two evaluation rubrics to encourage both self-reflection and structured assessment. The student self-evaluation rubric emphasises self-awareness in evaluating one’s accuracy in understanding content, depth of analysis, and ability to critically reflect on GenAI-generated responses versus traditional sources. For instance, students assess how AI responses align or diverge from journal articles and other verified sources, examining discrepancies or biases they uncover. This process of reflection helps students understand the affordances and limitations of using AI in an educational context, fostering reflection on their digital and information literacy skills.

The instructor evaluation rubric complements the student-focused assessment by emphasising pedagogical and analytical competencies. This rubric evaluates students on their understanding of the WebQuest topic, their effectiveness in comparing sources, and the depth of insight in their final analyses. Additionally, it assesses how well students articulate their reflections on the role of GenAI, as well as the clarity and coherence of their final presentation or report. By incorporating both self-assessment and instructor-led assessment, this approach fosters a holistic development of digital and information literacy skills, equipping future educators with a critical toolkit for navigating AI in the classroom (Holmes, Bialik, & Fadel, 2019; Webber, 2018).

This integration of GenAI into WebQuest methodology represents a significant pedagogical development, as it enables pre-service teachers to engage with AI while honing essential skills in evaluating information. Given the rapid pace at which AI is reshaping information access, understanding the affordances and limitations of AI becomes essential. Through the proposed methodology, pre-service teachers are guided in developing a critical approach to AI-mediated information. This paper contributes to the conversation on AI and education by offering a framework for using AI tools within an established educational methodology, fostering a digitally literate and discerning generation capable of navigating an AI-driven world.

References

Dodge, B. (1997). Some thoughts about WebQuests. San Diego State University. Retrieved from http://webquest.sdsu.edu/about_webquests.html

Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning. Boston, MA: Center for Curriculum Redesign.

Webber, S. (2018). The impact of artificial intelligence on information literacy. Journal of Information Literacy, 12(2), 1-15.



11:05am - 11:20am

Teachers' Perceptions on the Impact of AI - a Report from the PAIDEIA Erasmus+ Project

Peter Tiernan, Enda Donlon

Dublin City University, Ireland

Introduction

As AI technologies continue to advance, they open up new avenues for educators — from content creation and automation of administrative tasks to data-driven insights. This raises questions regarding the role of AI in education, and the ethical implications of its integration. This report, part of the PAIDEIA project funded by the Erasmus Plus Programme, delves into the perspectives of educators on the impact of AI in education, now and in the future. It provides an analysis of both the opportunities and challenges presented by AI, offering a range of perspectives from teachers across seven European countries.

Methodology

The research employs a mixed-methods approach, encompassing surveys and focus groups to gather comprehensive insights. Over 700 teachers from Belgium, Bulgaria, Ireland, Italy, Malta, Spain, and Türkiye participated, providing a diverse view of their current use of AI and their perceptions of AI in educational settings. Surveys were conducted first to establish baseline data, followed by focus groups that allowed for deeper exploration of themes.

Findings

The findings indicate that AI usage in education among PAIDEIA partner countries is generally low to moderate, with significant variation in how and where AI is applied. AI is sporadically used for tasks like lesson planning, personalising learning, and content creation, while areas such as assessment, feedback, and administrative tasks see even less support through AI tools. Countries like Bulgaria and Ireland show higher adoption of AI to enhance learning experiences, whereas usage in Belgium, Spain, and Türkiye remains minimal.

Understanding of AI among teachers also varies widely; while most teachers grasp basic AI principles and ethical considerations, many lack confidence in explaining AI processes, staying current with advancements, and applying AI effectively in educational settings. Teachers across PAIDEIA countries identify challenges such as the reliability of AI-generated information and data privacy issues, with mixed views on whether AI might undermine educational equity, diminish the teacher’s role, or impact teacher-student dynamics. Despite these concerns, educators are generally optimistic about AI’s potential to personalise learning, innovate teaching methods, and engage students, though Italian teachers expressed some hesitancy around these benefits.

Teachers’ perceptions of students’ views on AI reveal mixed enthusiasm and awareness, with students generally seen as curious but unclear about AI’s benefits and potential ethical issues. There is broad agreement on the need for mandatory AI training for teachers, with insufficient training provisions noted across countries. Opinions are mixed regarding the adequacy of CPD opportunities, confidence in pursuing further training, and access to online AI resources. Overall, the findings highlight a need for structured, accessible training on AI in education, with a strong emphasis on practical applications, ethical considerations, and tailored CPD resources to build teacher confidence and capacity for AI integration.

Conclusion

This research provides important insights into teachers’ perceptions of AI in education, revealing that while usage is quite low, teachers recognise the opportunities AI may bring in the future. However, it also highlights the need to address ethical concerns associated with AI, alongside the potential negative effect it may have on student creativity and critical thinking. The study strongly emphasises the need for comprehensive training programs for educators and clear guidelines on the use of AI in educational settings.

References

Abimbola, C., Eden, C. A., Chisom, O. N., & Adeniyi, I. S. (2024). Integrating AI in education: Opportunities, challenges, and ethical considerations. Magna Scientia Advanced Research and Reviews.

Harry, A. (2023). Role of AI in education. Interdisciplinary Journal and Humanity (INJURITY).

Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning. Boston, MA: Center for Curriculum Redesign.

Porayska-Pomsta, K., Holmes, W., & Nemorin, S. (2022). The ethics of AI in education.

 
12:00pm - 1:00pm: Invited speakers
Location: Seamus Heaney Theatre G114.
Session Chair: Kate Molloy
 
12:00pm - 12:15pm

Building Trust with AI: Practical Approaches for Higher Education

Rachel Fitzgerald

University of Queensland, Australia

As Generative AI becomes increasingly embedded in higher education, establishing trust among educators, students, and institutions is essential. This session explores practical strategies for integrating AI in ways that foster confidence, transparency, and ethical practice. Drawing on insights from developing policy and practice and from leading teaching innovation, I examine how thoughtful policy development, collaborative approaches between students and staff, and fostering autonomy and peer support can promote responsible AI use while enhancing learning for all.

Real-world examples will highlight successes and challenges in embedding AI into teaching practices, incorporating reflections from both students and educators. The presentation will also underscore the importance of collaborative initiatives, such as Communities of Practice (CoPs) and resource-sharing platforms, in building and sustaining trust across academic communities. By focusing on actionable approaches, I share thoughts on opportunities to navigate the complexities of Generative AI in education and inspire trust-driven innovation.



12:15pm - 12:30pm

Embracing Uncertainty, Community and Care in the Ethics of AI and Data in Education: Steps to a Contro-pedagogy of Cruelty

Juliana Elisa Raffaghelli

Università degli Studi di Padova, Italy

The prolific discussion around the ethics of technology has clearly reached the field of education. In this regard, transnational bodies such as the EU, UNESCO, and the OECD have published recommendations and guidelines to promote ethical AI and data use in education (Bosen et al., 2023; Directorate-General for Education, 2022; Molina et al., 2024; OECD & Education International, 2023). However, applied research in various social domains has revealed that the challenge of adopting an ethical approach to AI and data lies not in developing ethical norms but in implementing them. Thinking ethically is distinct from acting ethically (Morley et al., 2023). Moreover, ethical guidelines may even conflict with one another (Tamburrini, 2020, p. 68).

Professional and prospective educators may encounter significant challenges when attempting to adopt an ethical approach to technology, often influenced by techno-enthusiastic discourses (Nemorin et al., 2023). The platformization and datafication of education have directed attention toward user experience, productivity, and performance, under narratives that promote personalization and normalize access to technology as a marker of quality and inclusion (Williamson, 2023). In Rita Segato’s words (2018), these are the values of a pedagogy of cruelty. Educators and learners frequently perceive ethical frameworks as mere "compliance checklists" (Stewart, 2023), demonstrating limited engagement with, or understanding of, the underlying technological infrastructures and vested interests (Hartong & Förschler, 2019), or even full adherence to the pedagogies of cruelty in order to survive the system. Broad critical rules often fail to include explicit activism or actionable strategies (Rose, 2003).

To Segato, a contro-pedagogy of cruelty implies embracing human uncertainty. Contrary to the ideals of efficiency and productivity, ethics is a never-ending, imperfect work based on relationships and care. I draw here on Costello’s work (2023), considering that the ethics of care applied to the pedagogical relationship is a first and foundational choice for engaging in the ethical debate about technologies (not only which technologies, but whether we want them at all in an educational space).

If humanity's intricate quest for moral ideals through tangible actions cannot be fully encapsulated by normative prescriptions, is the ethics of AI and data “teachable”?

I argue here that overly rigid adherence to checklists—especially when ethics is merely "transmitted" or "taught" in a hierarchical dynamic—serves to “keep the ball rolling” in terms of a pedagogy of cruelty. I contend that a contro-pedagogy of cruelty must instead support actors in identifying ethical dilemmas through their own perspectives, reflecting on them, and engaging in community efforts and values to make moral decisions. Though this is my personal perspective, I will illustrate the concept by introducing some of the activities envisaged within the project ETH-TECH, “Anchoring Ethical Technology (AI and Data) Usage in the Education Practice”.

References

Bosen, L.-L., Morales, D., Roser-Chinchilla, J. F., Sabzalieva, E., Valentini, A., Vieira do Nascimento, D., & Yerovi, C. (2023). Harnessing the era of artificial intelligence in higher education: A primer for higher education stakeholders. UNESCO-IESALC. https://unesdoc.unesco.org/ark:/48223/pf0000386670?locale=en

Costello, E. (2023). Postdigital Ethics of Care. In P. Jandrić (Ed.), Encyclopedia of Postdigital Science and Education (pp. 1–6). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-35469-4_68-1

Directorate-General for Education, Youth, Sport and Culture. (2022). Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators. Publications Office of the European Union. https://data.europa.eu/doi/10.2766/153756

Hartong, S., & Förschler, A. (2019). Opening the black box of data-based school monitoring: Data infrastructures, flows and practices in state education agencies. Big Data & Society, 6(1), 2053951719853311. https://doi.org/10.1177/2053951719853311

Molina, E., Cobo-Romaní, C., Pineda, J., & Rovner. (2024). Revolución de la IA en la Educación: Lo Que Hay Que Saber. World Bank. https://documents.worldbank.org/en/publication/documents-reports/documentdetail/099355206192434920/IDU18a4e03161fc3d14a691a4dc13642bc9e086a

Morley, J., Kinsey, L., Elhalal, A., Garcia, F., Ziosi, M., & Floridi, L. (2023). Operationalising AI ethics: Barriers, enablers and next steps. AI & SOCIETY, 38(1), 411–423. https://doi.org/10.1007/s00146-021-01308-8

Nemorin, S., Vlachidis, A., Ayerakwa, H. M., & Andriotis, P. (2023). AI hyped? A horizon scan of discourse on artificial intelligence in education (AIED) and development. Learning, Media and Technology, 48(1), 38–51. https://doi.org/10.1080/17439884.2022.2095568

OECD, & Education International. (2023). Opportunities, guidelines and guardrails for effective and equitable use of AI in education. OECD Publishing. https://www.oecd.org/education/ceri/Opportunities,%20guidelines%20and%20guardrails%20for%20effective%20and%20equitable%20use%20of%20AI%20in%20education.pdf

Rose, E. (2003). The Errors of Thamus: An Analysis of Technology Critique. Bulletin of Science, Technology & Society, 23(3), 147–156. https://doi.org/10.1177/0270467603023003001

Segato, R. L. (2018). Contra-pedagogías de la crueldad. Prometeo Libros.

Stewart, B. (2023). Toward an Ethics of Classroom Tools: Educating Educators for Data Literacy. In J. E. Raffaghelli & A. Sangrà (Eds.), Data Cultures in Higher Education: Emergent Practices and the Challenge Ahead (pp. 229–244). Springer International Publishing. https://doi.org/10.1007/978-3-031-24193-2_9

Tamburrini, G. (2020). Etica delle macchine: Dilemmi morali per robotica e intelligenza artificiale. Carocci editore.

Williamson, B. (2023). The Social life of AI in Education. International Journal of Artificial Intelligence in Education. https://doi.org/10.1007/s40593-023-00342-5



12:30pm - 12:45pm

There is No Such Thing as an Ethical Black Box

James O'Sullivan

Higher Education Authority, Ireland

The ‘black box’ signifies systems whose decision-making processes remain opaque, even to those who use and advocate for them most frequently. Higher education should be predicated on openness: the free exchange of knowledge, the cultivation of critical inquiry, and the fostering of transparency in the pursuit of understanding. Any system that operates as a black box, obscuring its processes from scrutiny, stands in fundamental opposition to these tenets.

This presentation will argue that ethical implementations of AI and ‘black box’ systems like ChatGPT and Copilot are irreconcilable within the context of higher education. Generative AI systems deployed in educational settings must not only be technically effective but also embody the values of openness and accountability that underpin teaching, learning, and research. When the mechanisms of AI systems are hidden from view, they obstruct the ability of educators and students to critically engage with the technologies shaping their educational experiences. Regardless of any policy or practice, such opacity risks undermining trust, impeding the development of digital literacy, and reinforcing inequities by privileging those with access to proprietary knowledge.

Education seeks to empower learners to question, challenge, and contribute to knowledge creation, but this empowerment is impossible when systems operate in secrecy. In higher education, AI must not only be transparent but also participatory, enabling staff and students to understand, critique, and influence its use.

Ethical AI in higher education requires a commitment to openness that extends beyond technical explainability to include collaborative development, accessible design, and the rejection of proprietary opacity. In rejecting the notion of an ethical black box, this presentation calls for a paradigm in which transparency, engagement, and equity are central to AI’s role in the academy.

 
2:00pm - 3:10pm: Afternoon parallel session 4
Location: Seamus Heaney Theatre G114.
Session Chair: R Lowney
 

Exploring Information Literacy Competencies of Engineering Students in Their Use of ChatGPT

Rudie Coppieters

ATU, Ireland

Abstract

This research investigates the information literacy proficiency levels of engineering students in the context of their use of ChatGPT. With the increasing integration of AI tools like ChatGPT into educational settings, understanding how students engage with and evaluate information is critical. The study employs the DigComp 2.2 framework as a benchmark for measuring information literacy competency, providing a structured approach to assess skills such as information evaluation and information creation.

To contextualise competency levels, the study examines how students interact with ChatGPT in practical scenarios. A mixed-methods approach is adopted to achieve this: a survey collects data on the frequency, purposes, and types of ChatGPT use among students, while semi-structured interviews provide a deeper exploration of their proficiency levels based on specific tasks and decision-making processes. This combination allows for the triangulation of data, ensuring a comprehensive understanding of information literacy within AI use.

The findings of this study will offer insights into how engineering students navigate the challenges of information literacy in the digital age, particularly in relation to emerging AI technologies. By identifying competency levels and patterns of use, the research aims to inform educational strategies for enhancing information literacy, ultimately contributing to better preparation of students for the demands of the modern engineering workplace.



Students as Co-Designers of Ethical AI Integration in a Post-Primary Computer Science Classroom

Irene Stone

Dublin City University

This presentation examines how students can take an active role in shaping post-AI educational landscapes, emphasising their role as co-designers in defining how generative Artificial Intelligence (genAI) is used to support their learning of programming. Situated in the researcher’s own classroom, this in-depth study takes an ethical approach to exploring the role of genAI in supporting programming education at the post-primary level.

Aligned with UNESCO’s call for human-centered research that is “co-designed by teachers, learners, and researchers” (Miao & Holmes, 2023), this study addresses gaps in the literature regarding student-centered approaches in the area of generative AI and novice programming (Stone, 2024). A design-based research (DBR) methodology is employed, contributing theoretically and practically through exploring this novel space (McKenney & Reeves, 2019). Its focus on co-creation is a key factor in choosing this methodology (Anderson & Shattuck, 2012; Barab & Squire, 2004).

The research progresses through iterative phases of exploration, construction, and reflection (McKenney & Reeves, 2019). The first phase aims to understand the needs and context of the students and explore how they learn about AI before using it. In the second phase, students act as co-creators to develop pedagogical guidelines to support their use of prompts while learning programming. The third phase involves evaluating the pedagogical guidelines. This ethically grounded approach reflects DBR’s focus on “understanding the messiness of real-world practice” (Barab & Squire, 2004, p. 3). Its iterative nature ensures student participation remains at the core, with refinements made to the pedagogical framework throughout the process. Creativity and design remain central to the DBR approach (Hall, 2020), aligning with the aims and objectives of Leaving Certificate Computer Science (Department of Education, 2023).

An overview of the research design will be presented, before sharing preliminary findings from the study’s first phase, offering insights into student attitudes and understandings of generative AI, as well as their use of ChatGPT prompts to support their learning of programming. Paradoxes and dilemmas that surface through the research process will be presented as the researcher engages in a reflexive process (Braun & Clarke, 2013), particularly in balancing ethical considerations with the practical implementation of generative Artificial Intelligence in education. Feedback will be welcomed to inform and refine the next phases of this iterative EdD research.

References

Anderson, T., & Shattuck, J. (2012). Design-Based Research: A Decade of Progress in Education Research? Educational Researcher, 41(1), 16–25. https://doi.org/10.3102/0013189X11428813

Barab, S., & Squire, K. (2004). Design-Based Research: Putting a Stake in the Ground. Journal of the Learning Sciences, 13(1), 1–14. https://doi.org/10.1207/s15327809jls1301_1

Braun, V., & Clarke, V. (2013). Successful qualitative research: A practical guide for beginners. SAGE.

Department of Education. (2023). Leaving Certificate Computer Science Curriculum Specification. https://www.curriculumonline.ie/getmedia/6eaaa05e-a10b-4bae-bd85-99a1ede0cd67/LC-Computer-Science-specification-updated.pdf

Hall, T. (2020). Bridging Practice and Theory: The Emerging Potential of Design-based Research (DBR) for Digital Innovation in Education. Education Research and Perspectives: An International Journal, 47, 157–173.

McKenney, S. E., & Reeves, T. C. (2019). Conducting educational design research (Second edition). Routledge.

Miao, F., & Holmes, W. (2023). Guidance for generative AI in education and research. UNESCO. https://www.unesco.org/en/articles/guidance-generative-ai-education-and-research

Stone, I. (2024). Exploring Human-Centered Approaches in Generative AI and Introductory Programming Research: A Scoping Review. Proceedings of the 2024 Conference on United Kingdom & Ireland Computing Education Research, 1–7. https://doi.org/10.1145/3689535.3689553



Rewilding AI Pedagogies with Educational Values

Patricia Gibson

Dun Laoghaire Institute of Art, Design and Technology (IADT), Ireland

Artificial intelligence (AI) in education is impacting our pedagogical practices (Holmes, 2024; McNamara, 2024). For example, our educational values are being increasingly usurped by the notion that computational processing can be conceptualised as thinking and intelligence. Moreover, these ‘intelligent’ technologies are often presented as superior to human intelligence in terms of speed, efficiency and precision (Selwyn, 2017). Furthermore, these AI algorithms are powerful arbiters of knowledge creation and pedagogical practices through the various ways that they process large streams of online data by way of the classification, creation and dissemination of information and people (Edwards, 2015).

This ‘datafication of education’ is situated within an algorithmic culture where everything can be measured and verified against process-driven, goal-oriented pedagogies (Biesta, 2009). However, as Fawns (2018) argues, not everything important is quantifiable. Indeed, this over-reliance on factual data does not adequately consider human ‘value-judgements’ around what is educationally desirable (Biesta, 2009, p.35; O’Leary and Cui, 2020): what cannot be measured is not valued. Thus, the need to rewild our AI pedagogies with more educational values becomes imperative.

In response, I propose critical posthuman theory (Braidotti, 2019) to help us think about knowledge and its creation in alternative ways. The posthuman convergence does not position man as its central subject but rather imagines a new collective subject where humans, technology and material matter are inextricably interconnected in and of the world. The posthuman subject is embodied, embedded, relational and differentiated, with the capacity to affect and be affected (Braidotti, 2019). The metaphorical figurations of the posthuman subject do not separate the mind from the body; thus thinking capacity cannot be replaced with computational capacity, and intelligence is not a fully autonomous force but rather a relational activity. The embodied and embedded nature of the posthuman subject therefore rejects an instrumental notion of technology. Here, posthuman knowledge cannot be reduced to computational models that adopt an instrumental approach to teaching and learning, where human experience is categorised as variables to be counted and processed. This paper is significant in its contribution to how we might collectively rewild AI pedagogies with posthuman values that are more educationally desirable.

References

Biesta, G. (2009) Good education in an age of measurement: on the need to reconnect with the question of purpose in education. Educational Assessment, Evaluation and Accountability, 21 (1), 33–46. doi.org/10.1007/s11092-008-9064-9

Braidotti, R. (2019) Posthuman knowledge. Cambridge: Polity Press.

Edwards, R. (2015) Software and the hidden curriculum in digital education. Pedagogy, Culture & Society, 23 (2), 265–279. doi.org/10.1080/14681366.2014.977809

Fawns, T. (2018) Postdigital education in design and practice. Postdigital Science and Education, 1 (1), 132–145. doi.org/10.1007/s42438-018-0021-8

Holmes, W. (2024). AIED—Coming of Age? International Journal of Artificial Intelligence in Education. 34, 1–11. https://doi.org/10.1007/s40593-023-00352-3

McNamara, D.S. (2024). From Cognitive Simulations to Learning Engineering, with Humans in the Middle. International Journal of Artificial Intelligence in Education. 34, 42–54. https://doi.org/10.1007/s40593-023-00349-y

O’Leary, M. and Cui, V. (2020) Reconceptualising teaching and learning in higher education: challenging neoliberal narratives of teaching excellence through collaborative observation. Teaching in Higher Education, 25 (2), 141–156. doi.org/10.1080/13562517.2018.1543262

Selwyn, N. (2017) Education and technology: key issues and debates. London: Bloomsbury.



Developing Critical Data Literacy with Undergraduate Students to Counter Datafication

R Lowney

DCU, Ireland

The areas of learning analytics and critical data literacy are growing in focus in higher education, because both society and higher education are becoming increasingly ‘datafied’ (Atenas, Havemann and Timmermann, 2020; Verständig, 2021), particularly through collection of learner data to inform learning analytics. Critical data literacy for individuals has emerged as a way to counter datafication’s effects (Sander, 2020). It is an important part of a person’s wider digital literacies.

As a virtual learning environment (VLE) administrator in an Irish university, the author has a unique perspective on how this particular technology datafies its users. Recognising this, and wider processes of datafication in society, the author sought to respond to calls in the literature for greater critical data literacy education opportunities for students.

An educational intervention for undergraduate students in the Education discipline was developed, drawing upon Pangrazio and Selwyn’s (2019) domains of personal data literacies. It provided a space for students to come together and reflect on their technology use and data practices, through facilitated discussion. Students also explored a personal dashboard of their VLE data, developed by the author as ‘an object to think with’ (Papert, 1980) to prompt further reflection.

Post-intervention interviews were held to analyse the students’ experience and whether their critical data literacy had been fostered. Themes of agency, fairness and critical data literacy emerged. Participants had a positive experience of the intervention and changed their practice around technology and data as a result. They would welcome further educational opportunities to develop their critical data literacy, including within their undergraduate studies.

This study offers an example of one particular approach to critical data literacy education which shares students’ own data with them. This act of ‘data transparency’ (Prinsloo and Slade, 2015) with students can encourage the university to practice it more widely.

References

Atenas, J., Havemann, L. and Timmermann, C. (2020) ‘Critical literacies for a datafied society: academic development and curriculum design in higher education’, Research in Learning Technology, 28(0). Available at: https://doi.org/10.25304/rlt.v28.2468.

Pangrazio, L. and Selwyn, N. (2019) ‘“Personal data literacies”: A critical literacies approach to enhancing understandings of personal digital data’, New Media & Society, 21(2), pp. 419–437.

Papert, S. (1980) Mindstorms: Children, Computers, and Powerful Ideas. New York: Basic Books.

Prinsloo, P. and Slade, S. (2015) ‘Student privacy self-management: implications for learning analytics’, in Proceedings of the Fifth International Conference on Learning Analytics And Knowledge. LAK ’15: the 5th International Learning Analytics and Knowledge Conference, Poughkeepsie New York: ACM, pp. 83–92.

Sander, I. (2020) ‘Critical big data literacy tools—Engaging citizens and promoting empowered internet usage’, Data & Policy, 2. Available at: https://doi.org/10.1017/dap.2020.5.

Verständig, D. (2021) ‘Critical Data Studies and Data Science in Higher Education: An interdisciplinary and explorative approach towards a critical data literacy’, Seminar.net, 17(2). Available at: https://doi.org/10.7577/seminar.4397.

 
3:10pm - 4:00pm: Gasta session
Location: Seamus Heaney Theatre G114.
Session Chair: Tom Farrelly
 

In These Golden Years? Education Under Pressure

Bonnie Stewart

University of Windsor, Canada

Gasta!



Hype vs Reality

Elaine Burke

For Tech's Sake, Ireland

Gasta!



Bias and Misinformation in the AI Age: Why Critical Thinking Matters in the Classroom

Khetam Al Sharou

Dublin City University, Ireland

The rise of AI is transforming the way we access, process, and engage with information. While these innovations offer significant potential, they also come with the risks of misinformation and biases. As AI becomes more integrated into education, this talk highlights the need to equip students with the skills to critically evaluate and analyse the information they receive, beyond simply using AI tools. It calls for a shift in teaching practices that can develop both their critical thinking and digital literacy. By rethinking assignments and redesigning evaluation methods, educators can develop a generation of learners who are not only technologically proficient but also reflective, insightful, and adaptable, ensuring that students are prepared to succeed in an era of rapid technological change and information overload.



Pay Attention

Eileen Culloty

Dublin City University, Ireland

How technology and media hijack our attention and why we need to stop it



Antinomic Thinking, Generative AI and Online Quizzes

Damien Raftery

South East Technological University, Ireland

Antinomy is a situation in which two statements or beliefs that are both reasonable seem to contradict each other. In five minutes, we’ll try to explore how generative AI is both an aid and a threat to student learning with online quizzes.



GASTA (Great Authentic Strong Transversal Assessment) - The Case For Interactive Oral Assessments

Monica Ward

Dublin City University, Ireland

Close your eyes and imagine the world in which your students will work. They might be nurses, engineers, teachers, business people or some other profession. Imagine them going about their daily tasks and their line manager asking them to complete a task, on their own, in a room with only a pen and paper and no access to any external resources. Hard to imagine, right? So why do some people think this is a good way of assessing students? While there may be a veneer of protecting academic integrity, is this a sufficient reason for assessing students in an invigilated, time-limited, closed-book exam?

While not the answer to all your assessment problems, Interactive Oral (IO) assessments might be a suitable alternative approach for you. IO assessments are genuine, free-flowing and unscripted interactions between a student and a marker based on a real-life scenario. They give students an opportunity to showcase their knowledge and engage them in an authentic way that prepares them for professional life. IOs are academically robust, strong in terms of academic integrity, good at assessing students' transversal skills and good for student engagement. They can be used across a range of disciplines at all stages of the learning journey. There is an upfront load involved in designing and developing resources for IO assessments, but the benefits to academics and students make them well worth the effort. Could they claim the accolade of being GASTA - a Great Authentic Strong Transversal Assessment?



One Wild And Precious Life

Mags Amond

Computers in Education Society of Ireland, Ireland

This Gasta presentation will last 4m 59s, beginning and ending by shining the light of poet Mary Oliver's most famous question on each of our intentions for our 'wild and precious life' online. The contents in between will depend on what happens between now and the upload deadline.

 
4:05pm - 4:55pm: Closing keynote address with Professor Martin Weller
Location: Seamus Heaney Theatre G114.
 

AI, metaphors and ecosystems

Martin Weller

The Open University, United Kingdom

Artificial Intelligence creates an unpredictable future for many in higher education. When faced with uncertainty, metaphors provide a useful method to consider possibilities, solutions and impacts by transferring understanding from a known domain to the new one. This talk will consider the role of metaphors in understanding the impact of AI, particularly focusing on the concept of the information ecosystem. Metaphors of invasive species and control of ecosystems will be explored to examine possible responses to the advent of AI in the higher education information ecosystem.

 

 