Conference Agenda

Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location; select a single session for a detailed view.

Please note that all times are shown in the time zone of the conference (EEST).

Session Overview
RIM-SS: Models and Representations for Immersive Multimedia
Wednesday, 23/Sept/2020:
5:25pm - 6:25pm

Session Chair: Martin Alain
Location: Virtual platform

5:25pm - 5:40pm

Local Luminance Patterns for Point Cloud Quality Assessment

Rafael Diniz, Pedro Freitas, Mylene Farias

University of Brasilia, Brazil

In recent years, Point Clouds (PCs) have grown in popularity as the preferred data structure for representing 3D visual content. Examples of PC applications range from 3D representations of small objects up to large maps. The adoption of PCs has triggered the development of new coding, transmission, and presentation methodologies, along with novel methods for evaluating the visual quality of PC content. This paper presents a new objective full-reference visual quality metric for PC content based on a proposed descriptor called Local Luminance Patterns (LLP). LLP extracts luminance statistics from the reference and test PCs and compares them to assess the perceived quality of the test PC. The proposed method can be applied to both large- and small-scale PCs. Using publicly available PC quality datasets, we compared the proposed method with current state-of-the-art PC quality metrics, obtaining competitive results.
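The core idea of a full-reference statistics comparison can be sketched as follows. This is a deliberately simplified toy, not the authors' LLP implementation: it compares global luminance moments, whereas LLP is built on local luminance patterns; the point-cloud format (x, y, z, luminance) and the score formula are assumptions for illustration only.

```python
import math

def luminance_stats(points):
    """Simple global luminance statistics (mean, std) for a point
    cloud given as a list of (x, y, z, luminance) tuples."""
    lum = [p[3] for p in points]
    mean = sum(lum) / len(lum)
    var = sum((v - mean) ** 2 for v in lum) / len(lum)
    return mean, math.sqrt(var)

def quality_score(reference, test):
    """Toy full-reference score: the closer the luminance statistics
    of test and reference, the closer the score is to 1.0. The real
    LLP descriptor uses local patterns, not global moments."""
    m_r, s_r = luminance_stats(reference)
    m_t, s_t = luminance_stats(test)
    return 1.0 / (1.0 + abs(m_r - m_t) + abs(s_r - s_t))

ref = [(0, 0, 0, 100), (1, 0, 0, 120), (0, 1, 0, 110)]
deg = [(0, 0, 0, 90), (1, 0, 0, 130), (0, 1, 0, 105)]
print(quality_score(ref, ref))        # identical clouds -> 1.0
print(quality_score(ref, deg) < 1.0)  # degraded cloud scores lower
```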

5:40pm - 5:55pm


Fatma Hawary

INRIA, France

Equirectangular projection is commonly used to map 360° captures into a planar representation, so that existing processing methods can be applied directly to such content.

This format introduces stitching distortions that can impact the efficiency of further processing such as camera pose estimation, 3D point localization, and depth estimation. Indeed, even though some algorithms, mainly feature descriptors, remap the projected images onto a sphere, significant radial distortions remain in the processed data. In this paper, we propose to adapt the spherical model to the geometry of the 360° fish-eye camera, avoiding the stitching process altogether. For evaluation, we consider the angular coordinates of feature points on the sphere.

We assess the precision of different operations such as camera rotation angle estimation and 3D point depth calculation on spherical camera images.

Experimental results show that the proposed fish-eye-adapted sphere mapping yields more stable angle estimation and 3D point localization than operating on projected and stitched content.
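The standard equirectangular-to-sphere relation underlying this comparison can be sketched as follows. The fish-eye adaptation itself is the paper's contribution and is not reproduced here; this sketch only shows how pixel coordinates map to the angular coordinates used for evaluating feature points, with the image dimensions chosen arbitrarily.

```python
import math

def equirect_to_sphere(u, v, width, height):
    """Map a pixel (u, v) of an equirectangular image to spherical
    angles: longitude in [-pi, pi), latitude in [-pi/2, pi/2]."""
    lon = (u / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / height) * math.pi
    return lon, lat

def angular_distance(a, b):
    """Great-circle angle between two (lon, lat) directions, i.e. a
    comparison in the angular coordinates of feature points."""
    lon1, lat1 = a
    lon2, lat2 = b
    cos_d = (math.sin(lat1) * math.sin(lat2)
             + math.cos(lat1) * math.cos(lat2) * math.cos(lon1 - lon2))
    return math.acos(max(-1.0, min(1.0, cos_d)))

# The image centre maps to (0, 0); a point a quarter of the image
# width away on the equator is 90 degrees away on the sphere.
c = equirect_to_sphere(960, 480, 1920, 960)
p = equirect_to_sphere(1440, 480, 1920, 960)
print(math.degrees(angular_distance(c, p)))
```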

5:55pm - 6:10pm

Multi-Plane Image Video Compression

Scott Janus, Jill Boyce, Sumit Bhatia, Jason Tanner, Atul Divekar, Penne Lee

Intel, United States of America

Multi-Plane Images (MPI) are a new approach to storing volumetric content. An MPI represents a 3D scene within a view frustum, typically with 32 planes of texture and transparency information per camera. MPI literature to date has focused on still images, but applying MPI to video will require substantial compression to be viable for real-world productions. In this paper, we describe several techniques for compressing MPI video sequences by reducing the pixel rate while maintaining acceptable visual quality. We focus on traditional video codecs such as HEVC. While a new codec specifically tailored to MPI would likely achieve very good results, no devices exist today that support such a hypothetical codec. By comparison, hundreds of millions of real-time HEVC decoders are present in laptops and TVs today.
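How texture-plus-transparency planes combine into a rendered pixel can be sketched with the standard back-to-front "over" compositing used for MPIs. This is a toy single-channel version for illustration, not the paper's compression pipeline, and the two-plane example values are made up.

```python
def composite_mpi(planes):
    """Render one pixel from an MPI by back-to-front 'over'
    compositing. Each plane contributes (color, alpha); an MPI
    typically has ~32 such planes per camera frustum."""
    color = 0.0
    for plane_color, alpha in planes:  # planes ordered far -> near
        color = alpha * plane_color + (1.0 - alpha) * color
    return color

# Far plane fully opaque grey, near plane half-transparent white:
planes = [(0.5, 1.0), (1.0, 0.5)]
print(composite_mpi(planes))  # 0.75
```

With 32 planes of texture and alpha per camera, the raw pixel rate is dozens of times that of a conventional video, which is why the paper's pixel-rate reduction matters before handing the planes to an HEVC encoder.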

6:10pm - 6:25pm

Viewport Margins for 360-Degree Immersive Video

Igor D.D. Curcio, Saba Ahsan

Nokia Technologies, Finland

360-degree video delivery is increasingly widespread. New use cases are constantly emerging, making it a promising video technology for Extended Reality applications. Viewport-Dependent Delivery (VDD) is an established technique for saving network bit rate when transmitting omnidirectional video. One of the hardest challenges in VDD of 360-degree video is ensuring that the video quality in the user's viewport is always the highest possible, independent of the speed and span of the user's head motion. This paper introduces the concept of viewport margins: an extra high-quality spatial safety area around the user's viewport. Viewport margins improve the receiver's user experience by reducing the Motion-to-High-Quality Delay and the percentage of low-quality viewport seen by the user. We provide simulation results that show the advantage of using viewport margins for real-time, low-delay VDD of 360-degree video.
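The margin idea can be sketched as tile selection in a tiled VDD scheme. This is a hypothetical one-dimensional (yaw-only) tiling with 30-degree tiles, not the authors' system: the margin simply widens the angular range that is requested at high quality.

```python
def high_quality_tiles(viewport_yaw, viewport_width, margin, tile_width=30):
    """Return the tile centres (yaw, degrees) delivered at high
    quality: every tile overlapping the viewport extended by
    `margin` degrees on each side. Hypothetical 30-degree tiling."""
    lo = viewport_yaw - viewport_width / 2.0 - margin
    hi = viewport_yaw + viewport_width / 2.0 + margin
    tiles = []
    for centre in range(0, 360, tile_width):
        half = tile_width / 2.0
        # Compare on the circle: shift the tile centre to within
        # 180 degrees of the viewport centre before testing overlap.
        c = centre
        while c - viewport_yaw > 180:
            c -= 360
        while c - viewport_yaw < -180:
            c += 360
        if c + half > lo and c - half < hi:
            tiles.append(centre)
    return tiles

# A 90-degree viewport at yaw 0, without and with a 30-degree margin:
print(high_quality_tiles(0, 90, 0))   # [0, 30, 330]
print(high_quality_tiles(0, 90, 30))  # one extra tile on each side
```

The trade-off the paper quantifies is visible even in this sketch: a wider margin requests more high-quality tiles (more bit rate) but keeps fast head motion inside already-high-quality content, reducing the Motion-to-High-Quality Delay.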

Conference: IEEE MMSP 2020