Conference Agenda
Overview and details of the sessions of this conference.
Session Overview

Session: Paper Session 7

Presentations
9:50am - 10:15am

Eye-Assist Navigation System
Quinnipiac University, United States of America

Eye-Assist is an AI visual navigation system that converts camera input into concise, context-aware audio feedback for people with visual impairments. It fills gaps in current tools by combining real-time object detection, distance measurement, and an advanced priority algorithm so users can move safely, read text, and understand their surroundings. Eye-Assist uses a YOLOv8-based TensorFlow Lite model trained on custom day and night datasets, an Intel RealSense D435i depth camera for distance and motion cues, and a scoring algorithm that ranks nearby objects before generating spoken guidance. The demo will showcase real-time navigation alerts, a Read Mode for signs and documents using on-device OCR, and an Explain Surroundings feature triggered by voice commands, running on Android and in a Raspberry Pi 5 hybrid setup.

10:15am - 10:40am
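The kind of priority scoring the Eye-Assist abstract describes, ranking detected objects before generating spoken guidance, could be sketched as follows. The class weights, field names, and scoring formula here are invented for illustration; the abstract does not publish the actual algorithm:

```python
from dataclasses import dataclass

# Hypothetical class weights -- the real Eye-Assist weighting is not published.
CLASS_WEIGHTS = {"person": 3.0, "car": 2.5, "door": 1.5, "sign": 1.0}

@dataclass
class Detection:
    label: str        # object class from the detector
    distance_m: float # depth-camera distance estimate in meters

def priority(d: Detection) -> float:
    """Score a detection: nearer objects and higher-risk classes rank first."""
    weight = CLASS_WEIGHTS.get(d.label, 1.0)
    return weight / max(d.distance_m, 0.1)  # clamp to avoid division by zero

def rank(detections: list[Detection]) -> list[Detection]:
    """Sort detections so the most urgent one is announced first."""
    return sorted(detections, key=priority, reverse=True)

objs = [Detection("sign", 1.0), Detection("person", 2.0), Detection("car", 8.0)]
print([d.label for d in rank(objs)])  # → ['person', 'sign', 'car']
```

A scheme like this lets a nearby pedestrian outrank a distant vehicle even though both were detected in the same frame.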
Detection of Spinning Behavior with a Known Solution
1: Emmanuel College, Boston, MA, United States of America; 2: Codio, Inc., New York, NY, United States of America

In classroom and online learning environments, identifying which students need help at any moment is challenging. Students often enter a state of "spinning," continuing to work without making progress, and would benefit from timely intervention. We are developing a real-time system to detect spinning using behavioral patterns from students' programming editors. We collected fine-grained, often keystroke-level data from a Massive Open Online Course (MOOC) programming environment. In the first phase, we focus on assignments with known correct solutions, developing tools to measure students' distance from the goal using Levenshtein and AST edit distances, revealing proximity to or struggle toward the correct answer. By segmenting work into active sessions, we map progress over time across 28,000 students and 70 exercises, revealing improvement, divergence, and sustained effort without progress. We find that spinning often involves ups and downs rather than stagnation. Behavioral features extracted from these episodes will train a machine learning model in Phase II to detect spinning when solutions are unknown, enabling smarter, more responsive learning tools for online and classroom orchestration. We present our analytical approach, findings on student behavior patterns, and hypotheses for future work.
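The Phase I distance measurement described in this abstract can be illustrated with a standard Levenshtein edit distance computed against the known correct solution. The snapshots and solution below are invented for illustration; this is a generic sketch of the technique, not the authors' actual pipeline (which also uses AST edit distances):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Hypothetical editor snapshots scored against a known correct solution.
solution = "def add(a, b):\n    return a + b\n"
snapshots = [
    "def add(a, b):\n    return a - b\n",  # close: one wrong operator
    "def add(a, b):\n    return b - a\n",  # edits without net progress
    "def add(a, b):\n    return a + b\n",  # solved
]
print([levenshtein(s, solution) for s in snapshots])  # → [1, 3, 0]
```

Plotting such distances over a work session is what makes "ups and downs" visible: a spinning student's distance oscillates (1 → 3) rather than decreasing monotonically toward zero.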
