Conference Agenda: Education after the algorithm

Overview and details of this conference's sessions. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

Please note that all times are shown in the time zone of the conference.

 
Session Overview

Session: Invited speakers
Time: Friday, 21/Feb/2025, 12:00pm - 1:00pm
Session Chair: Kate Molloy
Location: Seamus Heaney Theatre G114, Cregan Library Building

Presentations
12:00pm - 12:15pm

Building Trust with AI: Practical Approaches for Higher Education

Rachel Fitzgerald

University of Queensland, Australia

As Generative AI becomes increasingly embedded in higher education, establishing trust among educators, students, and institutions is essential. This session explores practical strategies for integrating AI in ways that foster confidence, transparency, and ethical practice. Drawing on insights from developing policy and practice and from leading teaching innovation, I examine how thoughtful policy development, collaboration between students and staff, and the fostering of autonomy and peer support can promote responsible AI use while enhancing learning for all.

Real-world examples will highlight successes and challenges in embedding AI into teaching practices, incorporating reflections from both students and educators. The presentation will also underscore the importance of collaborative initiatives, such as Communities of Practice (CoPs) and resource-sharing platforms, in building and sustaining trust across academic communities. By focusing on actionable approaches, I offer ways to navigate the complexities of Generative AI in education and to inspire trust-driven innovation.



12:15pm - 12:30pm

Embracing Uncertainty, Community and Care in the Ethics of AI and Data in Education: Steps to a Contro-pedagogy of Cruelty

Juliana Elisa Raffaghelli

Università degli Studi di Padova, Italy

The prolific discussion around the ethics of technology has clearly reached the field of education. In this regard, transnational bodies such as the EU, UNESCO, and the OECD have published recommendations and guidelines to promote ethical AI and data use in education (Bosen et al., 2023; Directorate-General for Education, 2022; Molina et al., 2024; OECD & Education International, 2023). However, applied research in various social domains has revealed that the challenge of adopting an ethical approach to AI and data lies not in developing ethical norms but in implementing them. Thinking ethically is distinct from acting ethically (Morley et al., 2023). Moreover, ethical guidelines may even conflict with one another (Tamburrini, 2020, p. 68).

Professional and prospective educators may encounter significant challenges when attempting to adopt an ethical approach to technology, often under the influence of techno-enthusiastic discourses (Nemorin et al., 2023). The platformization and datafication of education have directed attention toward user experience, productivity, and performance, under narratives that promote personalization and normalize access to technology as a marker of quality and inclusion (Williamson, 2023). In Rita Segato's words (2018), these are the values of a pedagogy of cruelty. Educators and learners frequently perceive ethical frameworks as mere "compliance checklists" (Stewart, 2023), demonstrating limited engagement with, or understanding of, the underlying technological infrastructures and vested interests (Hartong & Förschler, 2019), or even full adherence to the pedagogies of cruelty in order to survive the system. Broad critical rules often fail to include explicit activism or actionable strategies (Rose, 2003).

For Segato, a contro-pedagogy of cruelty implies embracing human uncertainty. Contrary to the ideals of efficiency and productivity, ethics is never-ending, imperfect work grounded in relationships and care. I engage with Costello's work (2023), considering that an ethics of care applied to the pedagogical relationship is a first and foundational choice for entering the ethical debate about technologies (not only which technologies, but whether we want them in an educational space at all).

If humanity's intricate quest for moral ideals through tangible actions cannot be fully encapsulated by normative prescriptions, is the ethics of AI and data “teachable”?

I argue here that overly rigid adherence to checklists, especially when ethics is merely "transmitted" or "taught" in a hierarchical dynamic, is a way to "keep the ball rolling" in terms of a pedagogy of cruelty. I contend that a contro-pedagogy of cruelty must support actors in identifying ethical dilemmas through their own perspectives, reflecting on them, and engaging in community efforts and values to make moral decisions. Though this is my personal perspective, I will illustrate the concept above by introducing some of the activities envisaged within the project ETH-TECH, "Anchoring Ethical Technology (AI and Data) Usage in the Education Practice".

References

Bosen, L.-L., Morales, D., Roser-Chinchilla, J. F., Sabzalieva, E., Valentini, A., Vieira do Nascimento, D., & Yerovi, C. (2023). Harnessing the era of artificial intelligence in higher education: A primer for higher education stakeholders. UNESCO-IESALC. https://unesdoc.unesco.org/ark:/48223/pf0000386670?locale=en

Costello, E. (2023). Postdigital Ethics of Care. In P. Jandrić (Ed.), Encyclopedia of Postdigital Science and Education (pp. 1–6). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-35469-4_68-1

Directorate-General for Education, Youth, Sport and Culture. (2022). Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators. Publications Office of the European Union. https://data.europa.eu/doi/10.2766/153756

Hartong, S., & Förschler, A. (2019). Opening the black box of data-based school monitoring: Data infrastructures, flows and practices in state education agencies. Big Data & Society, 6(1), 2053951719853311. https://doi.org/10.1177/2053951719853311

Molina, E., Cobo-Romaní, C., Pineda, J., & Rovner, H. (2024). Revolución de la IA en la Educación: Lo Que Hay Que Saber. World Bank. https://documents.worldbank.org/en/publication/documents-reports/documentdetail/099355206192434920/IDU18a4e03161fc3d14a691a4dc13642bc9e086a

Morley, J., Kinsey, L., Elhalal, A., Garcia, F., Ziosi, M., & Floridi, L. (2023). Operationalising AI ethics: Barriers, enablers and next steps. AI & SOCIETY, 38(1), 411–423. https://doi.org/10.1007/s00146-021-01308-8

Nemorin, S., Vlachidis, A., Ayerakwa, H. M., & Andriotis, P. (2023). AI hyped? A horizon scan of discourse on artificial intelligence in education (AIED) and development. Learning, Media and Technology, 48(1), 38–51. https://doi.org/10.1080/17439884.2022.2095568

OECD, & Education International. (2023). Opportunities, guidelines and guardrails for effective and equitable use of AI in education. OECD Publishing. https://www.oecd.org/education/ceri/Opportunities,%20guidelines%20and%20guardrails%20for%20effective%20and%20equitable%20use%20of%20AI%20in%20education.pdf

Rose, E. (2003). The Errors of Thamus: An Analysis of Technology Critique. Bulletin of Science, Technology & Society, 23(3), 147–156. https://doi.org/10.1177/0270467603023003001

Segato, R. L. (2018). Contra-pedagogías de la crueldad. Prometeo Libros.

Stewart, B. (2023). Toward an Ethics of Classroom Tools: Educating Educators for Data Literacy. In J. E. Raffaghelli & A. Sangrà (Eds.), Data Cultures in Higher Education: Emergent Practices and the Challenge Ahead (pp. 229–244). Springer International Publishing. https://doi.org/10.1007/978-3-031-24193-2_9

Tamburrini, G. (2020). Etica delle macchine: Dilemmi morali per robotica e intelligenza artificiale. Carocci editore.

Williamson, B. (2023). The Social life of AI in Education. International Journal of Artificial Intelligence in Education. https://doi.org/10.1007/s40593-023-00342-5



12:30pm - 12:45pm

There is No Such Thing as an Ethical Black Box

James O'Sullivan

Higher Education Authority, Ireland

The ‘black box’ signifies systems whose decision-making processes remain opaque, even to those who use and advocate for them most frequently. Higher education should be predicated on openness: the free exchange of knowledge, the cultivation of critical inquiry, and the fostering of transparency in the pursuit of understanding. Any system that operates as a black box, obscuring its processes from scrutiny, stands in fundamental opposition to these tenets.

This presentation will argue that, within the context of higher education, the ethical implementation of AI is irreconcilable with ‘black box’ systems like ChatGPT and Copilot. Generative AI systems deployed in educational settings must not only be technically effective but also embody the values of openness and accountability that underpin teaching, learning, and research. When the mechanisms of AI systems are hidden from view, they obstruct the ability of educators and students to critically engage with the technologies shaping their educational experiences. Regardless of any policy or practice, such opacity risks undermining trust, impeding the development of digital literacy, and reinforcing inequities by privileging those with access to proprietary knowledge.

Education seeks to empower learners to question, challenge, and contribute to knowledge creation, but this empowerment is impossible when systems operate in secrecy. In higher education, AI must not only be transparent but also participatory, enabling staff and students to understand, critique, and influence its use.

Ethical AI in higher education requires a commitment to openness that extends beyond technical explainability to include collaborative development, accessible design, and the rejection of proprietary opacity. In rejecting the notion of an ethical black box, this presentation calls for a paradigm in which transparency, engagement, and equity are central to AI’s role in the academy.



 