Conference Agenda

Overview and details of the sessions of this conference. Select a date or location to show only the sessions held on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session
12e. Human Factors
Time:
Wednesday, 18/Sept/2024:
1:45pm - 3:15pm

Session Chair: Armin Janß
Location: V 47.06

Session Topics:
Human Factors

Presentations
1:45pm - 1:57pm
ID: 190
Conference Paper
Topics: Human Factors

Fantastic squeaks and where to find them: producing and analysing audible acoustics from leipäjuusto

Elina Nurkkala1, Craig Stuart Carlson2,1,3, Anu Hopia4, Michiel Postema1,3

1Department of Biomedical Technology, Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland; 2Department of Electrical Engineering and Automation, Aalto University, Espoo, Finland; 3School of Electrical and Information Engineering, University of the Witwatersrand, Johannesburg, Braamfontein, South Africa; 4Functional Foods Forum, Faculty of Medicine, University of Turku, Turku, Finland

Chewing not only converts food chunks into digestible portions, it also conveys audible acoustics, resulting in a perception of the type and condition of the food being eaten. As biomedical engineers, we may want to reproduce the same eating experience for those who cannot chew or for those who have allergic reactions to some foods. To understand this psychoacoustic phenomenon better, however, it is crucial to understand what produces the sound of specific foods. The purpose of this paper is to present a straightforward methodology to produce audible acoustics from a notoriously loud Finnish delicacy and to analyse the sound produced. One hundred samples of leipäjuusto and, as controls, one hundred samples of Gouda cheese were subjected to shear between a bamboo board and a wetted blade. All leipäjuusto samples and none of the Gouda cheese samples produced audible squeaks. A 0.1-s delay between blade displacement and sound production was observed. We attribute this delay to the buildup to release. The frequency spectra from pushing and pulling movements showed only negligible differences, indicating that the internal structure did not change between events. Therefore, the hypothesis that a disruptive event underlies the squeaking process is less plausible.
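The spectral comparison the abstract describes (pushing vs. pulling movements yielding near-identical frequency spectra) can be illustrated with a minimal sketch. This is not the authors' analysis code; all function names, signal parameters, and the synthetic test tones are invented for illustration.

```python
import numpy as np

def magnitude_spectrum(signal, fs):
    """Return one-sided frequency axis and magnitude spectrum of a signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs, spectrum

def spectral_difference(sig_a, sig_b, fs):
    """Relative L2 difference between two unit-normalised magnitude spectra."""
    _, spec_a = magnitude_spectrum(sig_a, fs)
    _, spec_b = magnitude_spectrum(sig_b, fs)
    spec_a = spec_a / np.linalg.norm(spec_a)
    spec_b = spec_b / np.linalg.norm(spec_b)
    return np.linalg.norm(spec_a - spec_b)

# Synthetic stand-ins for a "push" and a "pull" squeak recording:
fs = 44100                                  # sampling rate, Hz
t = np.arange(0, 0.5, 1.0 / fs)
push = np.sin(2 * np.pi * 3000 * t)         # 3-kHz squeak
pull = np.sin(2 * np.pi * 3000 * t + 0.3)   # same tone, phase-shifted
diff = spectral_difference(push, pull, fs)
print(f"relative spectral difference: {diff:.4f}")  # near zero: spectra match
```

A near-zero difference between two events, as in this toy case, supports the abstract's reasoning: if the spectra match, the internal structure producing the sound did not change between events.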



1:57pm - 2:09pm
ID: 107
Conference Paper
Topics: Human Factors

Advancement and evaluation of a medical organization device

Ferdinand Langer1, Korinna Welte1, Jette Widmer1, Peter Schmid1, Thomas Maier1, Michael Trick2

1Institute for Engineering Design and Industrial Design, Industrial Design Engineering, University of Stuttgart; 2Clinic for Anaesthesiology, Surgical Intensive Care Medicine, Emergency Medicine, Pain Therapy and Palliative Medicine, District Hospitals Reutlingen

In anaesthesia and intensive care medicine, managing the multitude of signal and substance lines connected to patients presents challenges, risking dislocations, incorrect treatment, and increased workload for healthcare professionals. Existing solutions for signal and substance line management have seen limited acceptance, lacking universality and flexibility. Addressing these challenges, the “intelligent Medical Organization Device” (iMOD) was developed as a modular tool. Evaluated through simulator training, iMOD demonstrated positive results in intuitiveness and acceptance. Feedback highlighted areas for improvement, focusing on line insertion and fixation strength. The organization device may be a component in reducing treatment time and costs, increasing patient safety, and improving the satisfaction of patients and healthcare professionals.



2:09pm - 2:21pm
ID: 228
Conference Paper
Topics: Robotics and Society

Speech-Controlled Robot enabling Cognitive Training and Stimulation in Dementia Prevention for Severely Disabled People

Jonas Schewior, Roman Grefen, Rodolfo Verde, Alina Ergardt, Ying Zhao, Walter H. Kullmann

Center for Robotics (CERI), Technische Hochschule Würzburg-Schweinfurt (THWS), Germany

Introduction

The current medical S3-guideline on dementia recommends the use of cognitive training and stimulation, e.g. through shared games, for mild cognitive impairment and mild dementia. The use of cobots, which allow direct human-robot collaboration, enables people with paralyzed upper limbs to actively participate in social activities. This research presents a novel approach through the development of a voice-controlled board game specifically tailored to the inclusion needs of people with severe disabilities.

Methods

The human voice commands, recorded via USB microphones, are digitally filtered. Speech recognition of the control commands is performed using a Convolutional Neural Network (CNN) based on the VGG-16 architecture. The robot's activity is controlled using the ROS 2 robot operating system.
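The pipeline described (recorded audio → digital filtering → classification of a command) can be sketched in a deliberately simplified form. The nearest-template matcher below is only a stand-in for the paper's VGG-16 CNN, and all names, command labels, and signal parameters are invented for illustration.

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Log-magnitude spectrogram: the 2-D 'image' a CNN would classify."""
    frames = [signal[i:i + frame_len] * np.hanning(frame_len)
              for i in range(0, len(signal) - frame_len, hop)]
    mags = np.abs(np.fft.rfft(np.array(frames), axis=1))
    return np.log1p(mags)

def classify(command_audio, templates):
    """Stand-in for the CNN: nearest-template match on spectrograms.
    `templates` maps a command label to a reference spectrogram."""
    spec = spectrogram(command_audio)
    scores = {label: -np.linalg.norm(spec - ref)
              for label, ref in templates.items()}
    return max(scores, key=scores.get)

# Two synthetic 'commands' at different pitches:
fs = 16000
t = np.arange(0, 0.3, 1.0 / fs)
left_cmd = np.sin(2 * np.pi * 440 * t)
right_cmd = np.sin(2 * np.pi * 880 * t)
templates = {"left": spectrogram(left_cmd), "right": spectrogram(right_cmd)}

# A slightly noisy utterance is still recognised:
noisy = right_cmd + 0.05 * np.random.default_rng(0).normal(size=t.size)
print(classify(noisy, templates))  # → right
```

In the actual system, the recognised label would then be published as a command to the robot via ROS 2, triggering the corresponding pick-and-place move on the playing field.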

Results

A portable, table-based complete system with a 3D-printed Tic-Tac-Toe playing field and robot assistant for severely disabled, paralyzed people has been developed. The robot control system employs a pick-and-place mechanism, seamlessly integrating with speech recognition to enhance gameplay interaction. The CNN model achieves an accuracy of 97%, ensuring reliable speech recognition throughout gameplay.

Conclusion

The targeted integration of robotic technologies and artificial intelligence opens up new avenues in the prevention of mental illness, care support and inclusion of older and disabled people.



Conference: BMT 2024