Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

Please note that all times are shown in the time zone of the conference.

Session Overview
Session
Afternoon parallel session 2
Time:
Friday, 21/Feb/2025:
2:00pm - 3:10pm

Session Chair: James Brunton
Location: F215


Presentations

Meeting the Opportunities and Challenges of Generative AI through Student Partnership: Co-Creation and the Development of Student GenAI Guidelines at Maynooth University

Adrian Kirwan, Aisling Flynn, Alan Waldron, Stephen McCarthy

Maynooth University, Ireland

With the release of ChatGPT in 2022, many educators and commentators realised the potential of Large Language Models (LLMs) to disrupt education. Much ink has been spilled discussing the challenges of these technologies (particularly to academic integrity), while also acknowledging their potential affordances (Cotton, Cotton, & Shipway, 2023; Mollick & Mollick, 2023, 2023b). However, the boosterism that has surrounded their release and promotion has skewed discussion concerning the future of GenAI in higher education: speculation about personalized learning, the ‘potential’ affordances of GenAI and pronouncements of rapid uptake in the current and future workplace appeal to the skills agenda which has come to dominate neo-liberal institutions of higher education (for claims of productivity increases see Eloundou, Manning, Mishkin, & Rock, 2023; see also Microsoft, 2024). Given the confusion about the impact of these technologies, it is no surprise that students are concerned, ill-prepared and poorly informed about their use. This paper focuses on Maynooth University’s response to this challenge: the use of student/staff partnership in the co-creation of its student-facing GenAI guidelines. It explores the purpose of the project, its format, reflections on the creation process and, finally, the guidelines it produced. In doing so, the paper serves two central purposes: firstly, it adds to existing discussions about the use and misuse of GenAI in higher education; secondly, it argues for the essential role of staff/student co-creation in responding to these challenges.

Reference List

Cotton, D. R. E., Cotton, P. A., & Shipway, J. R. (2023). Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT. [Electronic Version] Innovations in Education and Teaching International, 61(2), 228–239. https://doi.org/10.1080/14703297.2023.2190148 Accessed 29 May 2024.

Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models. [Electronic Version] Preprint (Version v). arXiv. https://doi.org/10.48550/arXiv.2303.10130 Accessed 4 May 2023.

Microsoft (2024). Generative AI in Ireland 2024 – Adoption Rates and Trends. https://pulse.microsoft.com/en-ie/work-productivity-en-ie/na/fa1-generative-ai-adoption-rates-are-on-the-rise-in-workplaces-according-to-our-latest-report-supported-by-trinity-college-dublin/ Accessed 23 May 2024.

Mollick, E. R., & Mollick, L. (2023). New Modes of Learning Enabled by AI Chatbots: Three Methods and Assignments. [Electronic Version] Preprint, SSRN. https://doi.org/10.2139/ssrn.4300783

Mollick, E. R., & Mollick, L. (2023b). Using AI to Implement Effective Teaching Strategies in Classrooms: Five Strategies, Including Prompts. [Electronic Version] The Wharton School Research Paper, SSRN. https://ssrn.com/abstract=4391243 or http://dx.doi.org/10.2139/ssrn.4391243



Integrating Automated Writing Evaluation with Teacher Feedback: Enhancing Writing Accuracy and Autonomy in Turkish EFL Classrooms

Aysegul Liman Kaban

Mary Immaculate College, University of Limerick, Ireland

This study explores the impact of integrating Automated Writing Evaluation (AWE) with traditional teacher feedback on the writing performance of Turkish EFL students. Using a quasi-experimental design, the research aims to determine whether the combined use of automated and human feedback can enhance students' writing scores and accuracy more effectively than teacher feedback alone. The study was conducted with 120 undergraduate EFL students, who were divided into an experimental group receiving combined feedback and a control group receiving only teacher feedback. Data were gathered through pre-test and post-test writing tasks, error analysis reports generated by the Criterion AWE tool, and student reflections on their feedback experience.

The results indicate that both feedback approaches led to improvements in students' overall writing scores, with no statistically significant difference between the groups in terms of overall performance. However, the experimental group showed a more pronounced reduction in grammatical and mechanical errors, suggesting that the integration of AWE with teacher feedback may be more effective in addressing these specific aspects of writing. Students in the experimental group benefited from the immediate, detailed, and accessible nature of automated feedback, which allowed them to correct errors while their ideas were still fresh. Additionally, participants reported that the promptness and specificity of AWE feedback motivated them to improve their writing. Despite these benefits, some students expressed concerns over the limited focus of automated feedback on content and occasional vague or incorrect advisory messages.

The findings underscore the potential of combining AWE systems with traditional feedback to alleviate the burden on teachers by allowing them to focus more on content and higher-order writing concerns. Moreover, the use of automated feedback fosters a more autonomous, learner-centered environment, encouraging students to self-regulate and engage actively in the writing process. The study's results align with existing literature on the advantages of technology-enhanced language learning, highlighting the importance of immediate feedback in facilitating learning and reducing recurrent errors. However, the study also acknowledges limitations, such as the non-randomized sampling, the specific context of English majors, and the absence of a delayed post-test to assess long-term effects. Future research should investigate the integration of AWE in diverse learner groups and instructional contexts, as well as explore its sustained impacts on writing proficiency.



Turning Off The Gaslight: The Best Of Scholarly Critical Thinking As A Response To GenAInt's Dystopias

Jonathan Poritz

various open education NGOs, including Open Washington, Open Oregon, Creative Commons, etc., Italy

It is said that under Mussolini, at least the trains in Italy ran on time. Similarly, there are probably some use cases of generative AI [genAInt] which make the world a better place -- some tools using large language models [LLMs] to support learners with disabilities come to mind -- but nearly all proposed uses are actually quite dystopian if one stops to look with a calm but critical eye. Moreover, once we turn down the gaslighting from big tech companies seeking to justify the hundreds of billions of dollars of investment they hope to continue receiving, it should be clear that the fundamentals of genAInt are horrible for the global climate, for the lives of creators, for the fundamental ethics of the academy, and for the gap between rich and poor, powerful and powerless. In fact, I would argue that in the context of education, one of the deepest wounds the current genAInt hype cycle is inflicting is to fundamentally devalue human knowledge, experience, and expertise: if an LLM can spit out a brand new calculus or art history textbook in an instant, what use are disciplinary experts ... and why would students waste their time building that expertise -- getting an education! -- if they can get the same outputs by typing prompts into LLMs?

It is not a time to abandon the very idea of education and expertise when major global empires have democratically elected leaders who lied (and continue to lie) to the public about basic history, science, and economics. The hucksters of genAInt solemnly evoke the existential danger of runaway artificial general intelligence -- which honest computer scientists know is as distant a dream today as it was when Alan Turing founded their subject three quarters of a century ago -- while in fact it is the concentration of wealth, destruction of our climate, and attempted destruction of the idea of expertise which truly threaten our world.

Instead, it is a perfect time to think critically about what the science of LLMs tells us they really are and really can ever do; to love and use technology where it empowers humans and otherwise makes the world a better place, as even the original Luddites did (contrary to the usual connotations of that word, as described in a recent book [Merchant: Blood in the Machine, 2023]); but to hate and fight against technology which steals, surveils, empowers the powerful and disenfranchises the powerless. Schools and universities can protect their communities with strong policies, and nations or transnational associations like the European Union can use strong laws to protect their information ecosystems from the enshittification which genAInt is rolling over the internet like a climate change-energized hurricane.



Dueling Discourses on AI in Higher Education: Critically Surfacing Tensions and Grounding Narratives in Context

James Brunton

Dublin City University, Ireland

Two things can be true at the same time: 1) AI tools represent a rapidly developing area of innovative and disruptive technological advancement that impacts on the operation of higher education programmes and institutions, which demands that this be given attention at every level of higher education institutions; and 2) an unsettling amount of the discourse in and around the use of AI tools in higher education is confused, contradictory, untethered from relevant context, and does more harm than good. This paper will chart tensions between the different ‘AI in higher education’ discourses and will ground them in different, relevant contexts, incorporating reflections on the author’s academic experiences and practices.

Discourses on ‘AI in higher education’ frequently treat AI tools as something new that students are using and that staff need to learn about so that they can adapt their teaching and assessment practices to this one technology/set of technologies. These discussions are frequently divorced from any discussion of existing institutional digital competency frameworks and related, strategic, resourced capacity building for staff and students. Discourse also frequently acknowledges the myriad ethical issues with staff and student use of different AI tools, while in the same breath saying that use of AI tools is both unavoidable and desirable. This paper puts forward that any attempt to bring a technology/set of technologies into educational practices should be grounded in an ecosystem of evidence-based capacity building in digital competencies, and in a framework for ethical use of technology in teaching and learning.

The ‘students are using it, so staff have to embrace it’ discourse is also highly aligned with the techno-deterministic and techno-optimistic narratives utilised by technology/edtech companies in the past, as detailed in critical digital pedagogy scholarship. One example is the recent positioning of edtech companies as potential saviours for institutions during the COVID-19 pandemic, offering technology products that could be adopted as part of a ‘pandemic pedagogy’. This paper puts forward that such narratives should be set in the context of critical scholarship on the potentially long-term consequences of engaging in this type of ‘magic buttonism’ rather than investing in staff capacity building and institutional structures to support staff and student engagement in digital teaching and learning.

There is a wealth of research and scholarship detailing that academic staff experience significant levels of stress, burnout, and poor work-life balance, with working during evenings and weekends being commonplace, and that this has been getting worse over time. Discourse on ‘AI in higher education’ frequently constructs a need for staff to individually upskill in a set of rapidly developing innovative and disruptive technologies, divorced from any acknowledgment of existing, problematic workload practices and related academic culture issues detailed in the literature. This paper puts forward that the practice of higher education institutions devolving ultimate responsibility for complex, systemic issues, such as ‘AI in higher education’, down to the level of the individual academic, while simultaneously not addressing issues with academic work role definitions, workload, and underpinning academic culture, will only serve to exacerbate the psychosocial hazards of academic work, with consequent, negative occupational health outcomes.

Finally, this paper will envision a more hope(punk)ful discourse on AI in higher education, based on a just digital transformation agenda, pedagogies of kindness and care, and open and inclusive educational practices.



 
Contact and Legal Notice · Contact Address:
Privacy Statement · Conference: Education after the algorithm
Conference Software: ConfTool Pro 2.6.153
© 2001–2025 by Dr. H. Weinreich, Hamburg, Germany