Conference Agenda

Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location; select a single session for a detailed view.

Please note that all times are shown in the time zone of the conference.

Session Overview
MEA-P: Multimedia Emerging Applications
Wednesday, 23/Sept/2020:
11:10am - 11:30am

Session Chair: Ali C. Begen
Location: Virtual platform

11:10am - 11:15am

RoSTAR: ROS-based Telerobotic Control via Augmented Reality

Xue Er Chung, Yuansong Qiao, Niall Murray

Athlone Institute of Technology, Ireland

Abstract— Despite the advancement of Augmented Reality (AR), the process of creating meaningful and intuitive human-robot interaction (HRI) interfaces with such technology is still extremely challenging. The cost of developing an AR robotic application is exceptionally high due to software licensing constraints and real-robot experiments. To mitigate these limitations, we propose RoSTAR, a novel, open-source HRI system based on the Robot Operating System (ROS) and AR. RoSTAR is developed using ROS to simulate a 6 Degree of Freedom (DOF) robotic arm and is capable of exporting the robot arm model directly to Unity. An AR Head Mounted Display (HMD) is deployed for delivering user interaction and instructions to a ROS-powered robotic arm. This system has the potential to be used for different process tasks, such as robotic gluing, dispensing and arc welding. Quality of Experience (QoE) evaluation at the user side will be carried out in the future to understand and enhance end user satisfaction.
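As an illustrative sketch of one step such a pipeline needs (not the authors' code): an AR HMD typically reports the user's pose as a position plus a unit quaternion, which often has to be converted to Euler angles before being mapped onto robot commands. A minimal, self-contained conversion in Python (function name and conventions are assumptions):

```python
import math

def quaternion_to_euler(x, y, z, w):
    """Convert a unit quaternion (as an AR HMD might report) to
    roll/pitch/yaw in radians, using the common ZYX convention."""
    # roll (rotation about the x-axis)
    roll = math.atan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    # pitch (rotation about the y-axis), clamped to avoid domain
    # errors from floating-point noise near gimbal lock
    s = max(-1.0, min(1.0, 2.0 * (w * y - z * x)))
    pitch = math.asin(s)
    # yaw (rotation about the z-axis)
    yaw = math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return roll, pitch, yaw

# Identity quaternion -> no rotation
print(quaternion_to_euler(0.0, 0.0, 0.0, 1.0))  # (0.0, 0.0, 0.0)
```

In a real RoSTAR-style system the resulting angles would be packed into a ROS message rather than printed.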

11:15am - 11:20am

Evaluation of Different Task Distributions for Edge Cloud-based Collaborative Visual SLAM

Sebastian Eger1, Rastin Pries2, Eckehard Steinbach1

1Technical University of Munich, Germany; 2Nokia Bell Labs, Munich, Germany

In recent years, a variety of visual SLAM (Simultaneous Localization and Mapping) systems have been proposed. These systems allow camera-equipped agents to create a map of the environment and determine their position within this map, even without an available GPS signal. Visual SLAM algorithms differ mainly in the way the image information is processed and whether the resulting map is represented as a dense point cloud or with sparse feature points. However, most systems have in common that a high computational effort is still necessary to create an accurate, correct and up-to-date pose and map. This is a challenge for smaller mobile agents with limited power and computing resources.

In this paper, we investigate how the processing steps of a state-of-the-art feature-based visual SLAM system can be distributed among a mobile agent and an edge-cloud server. Depending on the specification of the agent, it can run the complete system locally, offload only the tracking and optimization part, or run nearly all processing steps on the server. For this purpose, the individual processing steps and their resulting data formats are examined, and methods for transmitting them efficiently to the server are presented. Our experimental evaluation shows that the CPU load can be reduced for all task distributions which offload part of the pipeline to the server. For agents with low computing power, the processing time for the pose estimation can even be reduced. In addition, the higher computing power of the server allows the frame rate and accuracy of the pose estimation to be increased.
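To give a rough sense of why the chosen split point matters for bandwidth (a back-of-the-envelope illustration, not figures from the paper): an agent that offloads after feature extraction sends keypoints and descriptors instead of full frames. The frame size, feature count, and per-feature sizes below are assumptions typical of ORB-style features:

```python
def frame_bytes(width, height, channels=1, bytes_per_pixel=1):
    """Size of one raw grayscale frame sent to the server."""
    return width * height * channels * bytes_per_pixel

def feature_bytes(n_features, descriptor_bytes=32, keypoint_bytes=16):
    """Size of one frame's worth of binary features: a 32-byte
    descriptor plus ~16 bytes of keypoint metadata
    (x, y, scale, angle) per feature."""
    return n_features * (descriptor_bytes + keypoint_bytes)

raw = frame_bytes(640, 480)   # 307200 bytes per frame
feats = feature_bytes(1000)   # 48000 bytes per frame
print(f"raw frame: {raw} B, features: {feats} B, ratio: {raw / feats:.1f}x")
```

Even this crude estimate shows why extracting features on the agent and transmitting them in a compact format can pay off, which is the kind of trade-off the paper's task distributions explore.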

11:20am - 11:25am

Detection of Gait Abnormalities caused by Neurological Disorders

Daksh Goyal1, Koteswar Rao Jerripothula2, Ankush Mittal3

1NIT Surathkal, India; 2IIIT Delhi, India; 3Raman Classes, India

In this paper, we leverage gait to potentially detect some of the important neurological disorders, namely Parkinson's disease, Diplegia, Hemiplegia, and Huntington's Chorea. Persons with these neurological disorders often have a very abnormal gait, which motivates us to target gait for their potential detection. Some of the abnormalities involve the circumduction of legs, forward-bending, involuntary movements, etc. To detect such abnormalities in gait, we develop gait features from the key-points of the human pose, namely shoulders, elbows, hips, knees, ankles, etc. To evaluate the effectiveness of our gait features in detecting the abnormalities related to these diseases, we build a synthetic video dataset of persons mimicking the gait of persons with such disorders, considering the difficulty in finding a sufficient number of people with these disorders. We name it the NeuroSynGait video dataset. Experiments demonstrated that our gait features were indeed successful in detecting these abnormalities.
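As a sketch of the kind of pose-based feature the abstract mentions (a hypothetical example, not the authors' exact feature set): joint angles can be computed directly from 2-D keypoint coordinates, e.g. the knee flexion angle from the hip, knee, and ankle positions:

```python
import math

def joint_angle(a, b, c):
    """Angle at keypoint b (in degrees) formed by keypoints a and c,
    e.g. knee flexion angle from (hip, knee, ankle) 2-D coordinates."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    # clamp against floating-point noise before acos
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos_t))

# A fully extended leg: hip, knee, ankle collinear -> 180 degrees
print(joint_angle((0, 0), (0, 1), (0, 2)))
```

Tracking such angles over a gait cycle, and comparing left and right limbs, is one plausible way features like circumduction or forward-bending could be quantified.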

11:25am - 11:30am

The Suitability of Texture Vibrations Based on Visually Perceived Virtual Textures in Bimodal and Trimodal Conditions

Ugur Alican Alma, Ercan Altinsoy

Technische Universität Dresden, Germany

In this study, the suitability of recorded and simplified texture vibrations is evaluated against visual textures displayed on a screen. The tested vibrations are 1) recorded vibration, 2) single sinusoids, and 3) band-limited white noise, which were used in previous work. The aim of this study is to assess the congruence between the vibrotactile feedback and the texture images in the absence and the presence of auditory feedback. Two types of auditory feedback (touch-produced and synthesized sounds) were used for the trimodal test, and they were tested at different loudness levels. In this way, the most plausible combination of vibrotactile and audio stimuli when exploring the visual textures can be determined. Based on the psychophysical test results, the similarity ratings of the texture vibrations were not found to be significantly different from each other in the bimodal condition, as opposed to the former study in which the textures were haptically explored. In the trimodal judgments, synthesized sound influenced the similarity ratings significantly, while touch-produced sound did not affect the perceived similarity.
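The three vibration classes compared in the abstract can be sketched in a few lines (an illustration of the signal types only, not the study's actual stimuli; the sampling rate, carrier frequency, and smoothing length are assumptions):

```python
import math
import random

FS = 8000  # sampling rate in Hz (assumed)

def sinusoid(freq_hz, duration_s, amp=1.0):
    """Single-sinusoid texture vibration."""
    n = int(FS * duration_s)
    return [amp * math.sin(2 * math.pi * freq_hz * i / FS) for i in range(n)]

def band_limited_noise(duration_s, smooth=8, amp=1.0, seed=0):
    """Crude band-limited white noise: white noise low-passed
    with a moving average of length `smooth`."""
    rng = random.Random(seed)
    n = int(FS * duration_s)
    white = [rng.uniform(-amp, amp) for _ in range(n + smooth)]
    return [sum(white[i:i + smooth]) / smooth for i in range(n)]

sine = sinusoid(250.0, 0.5)   # 250 Hz is near peak vibrotactile sensitivity
noise = band_limited_noise(0.5)
print(len(sine), len(noise))  # both 4000 samples
```

The recorded-vibration condition would instead play back a measured signal; the two generators above correspond to the "simplified" stimuli.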

Conference: IEEE MMSP 2020
Conference Software - ConfTool Pro 2.6.135+CC
© 2001 - 2020 by Dr. H. Weinreich, Hamburg, Germany