Conference Agenda
Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).
Session Overview

Paper Session 4

Presentations
4:15pm - 4:40pm
Experiential Educational Ethics Activities in Undergraduate Education: Instructor Observations

Rochester Institute of Technology, United States of America

Although teaching ethical computing practices is essential for developing responsible technologists, ethics is still too frequently overlooked in computing education. As a consequence, students may graduate not only without the skills to identify and address ethical dilemmas, but also without an understanding of why ethical decision-making is essential in the first place. To overcome this gap, we have created a set of easily adoptable, experiential ethics-focused [hidden] designed to systematically introduce students to core ethical concepts in computing and to emphasize the real-world consequences and importance of ethical computing practices. In this paper, we report on instructor observations regarding the inclusion of these ethics-focused computing labs at a diverse set of categorically distinct partner institutions. The complete project materials are openly available on our website: [hidden]

4:40pm - 5:05pm
Reverse Engineering Student Misconceptions

United States Military Academy, West Point, United States of America

In this study, we analyze the effectiveness of generative AI in diagnosing student misconceptions in an undergraduate operating systems class. Notably, we asked students to assess whether an LLM's feedback correctly diagnosed the mistaken belief that was the proximate cause of their errors. We tested three models: ChatGPT, Claude, and Gemini. Gemini was the most consistent in terms of perceived correctness by students, while ChatGPT received the highest student ratings. In a qualitative assessment, the LLMs correctly diagnosed the students' misconceptions 42 percent of the time. In a quantitative assessment, 63.5 percent of LLM responses were perceived as exactly or almost exactly correct. We conclude that LLMs can provide valuable formative feedback but are not yet ready to be the sole source of insights.