Conference Agenda

All times are shown in the time zone of the conference.

 
Session Overview
Session: Afternoon parallel session 1
Time: Friday, 21/Feb/2025, 2:00pm - 3:10pm
Session Chair: Michał Wieczorek
Location: F205


Presentations

Pick and Mix: The Sweet Mix of AI and AT to Help Students with Academic Challenges

Trevor Boland

DCU, Ireland

AI and Assistive Technology (AT) are evolving and increasingly overlapping, and together they complement Universal Design for Learning, which advocates an inclusive teaching and learning framework. This melting pot of AI and AT can help not only students with disabilities, who can struggle with time management, motivation, procrastination, focus and notetaking, but also the wider student cohort, who can face these challenges too. These tools can support students with such challenges, but curating the technologies is necessary so that students can build them into their learning routines. Harnessing the student experience and voice can begin to compile an authentic list of these AI and AT options and enhance the learning journey. The presentation will outline some of these tools, how they support students, and how they can be built into daily practice. Demonstrations of free tools such as Goblin.tools will show how academic tasks can be broken down, while paid AT tools such as Glean for notetaking will demonstrate how quizzes are formed from lecture content to test students' memory and comprehension of it. The overall aim of the session is to impart positive awareness of AI in an academic context for students, and to show how this merging of AI and AT is creating more options for inclusion that support the widening student body in Higher Education.



Critical AI Literacy Through Critical Virtual Exchange

Mirjam Hauck

The Open University, United Kingdom

Virtual exchange (VE) stands for online collaborative learning between groups of students in different cultural contexts and geographical locations, combining the deep impact of intercultural dialogue with the broad reach of digital technologies (EVOLVE, 2020). It offers learning benefits - intercultural communicative competence and digital literacy skills development - across the curriculum. In fact, it is an established ‘internationalisation of the curriculum’/’internationalisation at home’ (IaH) strategy in higher education worldwide (O’Dowd & Beelen, 2021). However, VE and VE-based IaH are not inherently equitable and inclusive. Like other forms of online or blended education, they are prone to Western hegemonies and influenced by inequalities in access to and experience with technology, institutional constraints (e.g., lack of support and incentives for educators), gender, race, age, English language dominance, and socio-political and geopolitical challenges (Helm, 2020).

Critical VE (CVE; Hauck, 2023) is VE viewed through a social justice and inclusion lens and is informed by critical digital literacy (CDL), which “examines how the operation of power within digital contexts shapes knowledge, identities, social relations, and formations in ways that privilege some and marginalize others” (Darvin, 2017, p. 2). We frame critical AI literacy as a subset of CDL and illustrate how CVE provides an ideal educational setting for critical AI literacy skills development for both educators and students, allowing them to “gesture towards” (Kerr & Andreotti, 2018; Stein et al., 2020) decolonial VE in which participants can engage in thinking “otherwise” (Reljanovic Glimäng, 2022). The approach will be illustrated through several CVE examples in which students carried out online collaborative project work that included the critical use and evaluation of GenAI tools and their output.

References:

Darvin, R. (2017). Language, Ideology, and Critical Digital Literacy. In S. Thorne, & S. May (Eds.), Language, Education and Technology. Encyclopaedia of Language and Education (3rd ed.). Springer, Cham.

EVOLVE Project Team (2020). The Impact of Virtual Exchange on Student Learning in Higher Education: EVOLVE Project Report. http://hdl.handle.net/11370/d69d9923-8a9c-4b37-91c6-326ebbd14f17

Hauck, M. (2023). From Virtual Exchange to Critical Virtual Exchange and Critical Internationalization at Home. In Diversity Abroad, The Global Impact Exchange. https://www.diversitynetwork.org/GlobalImpactExchange

Helm, F. (2020). EMI, internationalisation, and the digital. International Journal of Bilingual Education and Bilingualism, 23(3), 314-325. https://doi.org/10.1080/13670050.2019.1643823

O’Dowd, R., & Beelen, J. (2021). Virtual exchange and Internationalisation at Home: navigating the terminology, EAIE Blog & podcast. https://www.eaie.org/blog/virtual-exchange-iah-terminology.html

Reljanovic Glimäng, M. (2022). Safe/brave spaces in virtual exchange on sustainability. Journal of Virtual Exchange, 5, 61-81.



AI-Based Research Mentors: Plausible Scenarios & Ethical Issues

Daniel Crean1,2, Michał Wieczorek1, Bert Gordijn1, Alan Kearns1

1Dublin City University, Ireland; 2University College Dublin, Ireland

Mentorship is considered an important approach in Research Integrity (RI) teaching, e.g. encouraging researchers – the mentees – to act with the highest levels of integrity. However, mentorship is complex, with several known limitations, e.g. a lack of standardisation in mentor training and practice. Recently, a discourse has begun on the benefits of Artificial Intelligence (AI)-based mentors (AIMs), with authors often citing how AIMs may alleviate some of the limitations of current mentorship models. Here, we focus on the research environment and how AI-based research mentors (AIRMs) might be used in, and impact on, the area of RI. While the examination of ethical issues with the use of AI across an array of areas is underway, e.g. autonomous vehicles, the identification of the ethical issues with the use of AIRMs is nearly absent from the literature. Guided by the Anticipatory Technology Ethics (ATE) approach (Brey, 2012), we have addressed this absence by 1) outlining four plausible future scenarios concerning AIRMs, with a focus on their use and impact in the area of RI, and 2) identifying the ethical issues with such use. In this talk, we will present the findings from our work to date.

References:

Anderson, M. S., Horn, A. S., Risbey, K. R., Ronning, E. A., De Vries, R., & Martinson, B. C. (2007). What Do Mentoring and Training in the Responsible Conduct of Research Have To Do with Scientists’ Misbehavior? Findings from a National Survey of NIH-Funded Scientists. Academic Medicine, 82(9), 853-860. https://doi.org/10.1097/ACM.0b013e31812f764c

Brey, P. A. E. (2012). Anticipating ethical issues in emerging IT. Ethics and Information Technology, 14, 305–317. https://doi.org/10.1007/s10676-012-9293-y

Crean, D., Gordijn, B., & Kearns, A. J. (2023). Teaching research integrity as discussed in research integrity codes: A systematic literature review. Account Res, 1-24. https://doi.org/10.1080/08989621.2023.2282153

Crean, D., Gordijn, B., & Kearns, A. J. (2024). Impact and Assessment of Research Integrity Teaching: A Systematic Literature Review. Science and Engineering Ethics, 30(4), 30. https://doi.org/10.1007/s11948-024-00493-1

Hill, S. E. M., Ward, W. L., Seay, A., & Buzenski, J. (2022). The Nature and Evolution of the Mentoring Relationship in Academic Health Centers. J Clin Psychol Med Settings, 29(3), 557-569. https://doi.org/10.1007/s10880-022-09893-6

Labib, K., Evans, N., Roje, R., Kavouras, P., Reyes Elizondo, A., Kaltenbrunner, W., Buljan, I., Ravn, T., Widdershoven, G., Bouter, L., Charitidis, C., Sørensen, M. P., & Tijdink, J. (2021). Education and training policies for research integrity: Insights from a focus group study. Science and Public Policy, 49(2), 246-266. https://doi.org/10.1093/scipol/scab077

Pizzolato, D., & Dierickx, K. (2023). The Mentor’s Role in Fostering Research Integrity Standards Among New Generations of Researchers: A Review of Empirical Studies. Science and Engineering Ethics, 29(3), 19. https://doi.org/10.1007/s11948-023-00439-z



AI and Democratic Education: A Critical Pragmatist Assessment

Michał Wieczorek

Dublin City University, Ireland

Abstract

This paper examines the relationship between artificial intelligence and democratic education. AI and other digital technologies are currently being touted for their potential to “democratise” education, even if it is not clear what this would entail (see, e.g., Adel et al., 2024; Kamalov et al., 2023; Kucirkova & Leaton Gray, 2023). By analysing the discourse surrounding educational AI, I distinguish four distinct but interrelated meanings of democratic education: equal access to quality learning, education for living in a democracy, education through democratic practice, and democratic governance of education. I argue that none of these four meanings can render education democratic on its own, and present Dewey’s (1956; 2018) notion of democratic education as integrating these distinct conceptualisations. Dewey emphasises that education needs to provide children with the skills and dispositions necessary for democratic living, experience in communication and cooperation, opportunities to codetermine the shape of democratic institutions and of education itself, and equal opportunities to participate in learning. By examining today’s commercial AI tools (Holmes & Tuomi, 2022; Khan, 2024), I argue that their emphasis on the individualisation of learning, their narrow focus on mastery of the curriculum, and the drive to automate teachers’ tasks are obstacles to democratic education. I demonstrate that AI deprives children of opportunities to gain experience in democratic living and to acquire communicative and collaborative skills and dispositions, while also habituating them to an environment over which they have little or no control, potentially affecting how they will approach shared problems as democratic citizens. I conclude by outlining some suggestions for bringing educational AI more in line with a pragmatist notion of democracy and democratic education.

References

Adel, Amr, Ali Ahsan, and Claire Davison. ‘ChatGPT Promises and Challenges in Education: Computational and Ethical Perspectives’. Education Sciences 14, no. 8 (August 2024): 814. https://doi.org/10.3390/educsci14080814.

Dewey, John. The Child and the Curriculum: And The School and Society. University of Chicago Press, 1956.

Dewey, John. Democracy and Education. Gorham, ME: Myers Education Press, 2018.

Holmes, Wayne, and Ilkka Tuomi. ‘State of the Art and Practice in AI in Education’. European Journal of Education 57, no. 4 (2022): 542–70. https://doi.org/10.1111/ejed.12533.

Kamalov, Firuz, David Santandreu Calonge, and Ikhlaas Gurrib. ‘New Era of Artificial Intelligence in Education: Towards a Sustainable Multifaceted Revolution’. Sustainability 15, no. 16 (January 2023): 12451. https://doi.org/10.3390/su151612451.

Khan, Salman. Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing). New York: Viking, 2024.

Kucirkova, Natalia, and Sandra Leaton Gray. ‘Beyond Personalization: Embracing Democratic Learning Within Artificially Intelligent Systems’. Educational Theory 73, no. 4 (2023): 469–89. https://doi.org/10.1111/edth.12590.



 