Conference Agenda

Session
RIM-SS: Models and Representations for Immersive Multimedia
Time: Wednesday, 23/Sept/2020, 5:25pm - 6:25pm

Session Chair: Martin Alain
Location: Virtual platform

Presentations
5:25pm - 5:40pm

Local Luminance Patterns for Point Cloud Quality Assessment

Rafael Diniz, Pedro Freitas, Mylene Farias

University of Brasilia, Brazil

In recent years, Point Clouds (PCs) have grown in popularity as the preferred data structure for representing 3D visual content. Examples of PC applications range from 3D representations of small objects up to large maps. The adoption of PCs has triggered the development of new coding, transmission, and presentation methodologies, along with novel methods for evaluating the visual quality of PC content. This paper presents a new objective full-reference visual quality metric for PC content, based on a proposed descriptor called Local Luminance Patterns (LLP). LLP extracts luminance statistics from the reference and test PCs and compares them to assess the perceived quality of the test PC. The proposed quality assessment method can be applied to both large- and small-scale PCs. Using publicly available PC quality datasets, we compared the proposed method with current state-of-the-art PC quality metrics, obtaining competitive results.
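
The abstract describes LLP as comparing luminance statistics between reference and test PCs. As a rough illustration of that full-reference idea (not the authors' actual descriptor), the sketch below compares local luminance mean and standard deviation over k-nearest-neighbor neighborhoods; the neighborhood size, the error measure, and the pooling into a single score are all assumptions.

```python
# Minimal sketch of a full-reference luminance-statistics comparison for
# point clouds, in the spirit of (but not identical to) the LLP descriptor.
import numpy as np
from scipy.spatial import cKDTree

def local_luma_stats(points, luma, k=16):
    """Mean/std of luminance over each point's k-nearest neighborhood."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)   # (N, k) neighbor indices
    neigh = luma[idx]                  # (N, k) neighborhood luminance
    return neigh.mean(axis=1), neigh.std(axis=1)

def luma_quality_score(ref_pts, ref_luma, test_pts, test_luma, k=16):
    """Compare local luminance statistics of test points against those of
    their nearest reference points (lower = more similar)."""
    ref_mean, ref_std = local_luma_stats(ref_pts, ref_luma, k)
    test_mean, test_std = local_luma_stats(test_pts, test_luma, k)
    # Associate each test point with its closest reference point.
    _, nn = cKDTree(ref_pts).query(test_pts, k=1)
    err = np.abs(test_mean - ref_mean[nn]) + np.abs(test_std - ref_std[nn])
    return float(err.mean())          # pooled distortion score (assumption)
```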

Diniz-Local Luminance Patterns for Point Cloud Quality Assessment-149.pdf


5:40pm - 5:55pm

Sphere Mapping for Feature Extraction from 360° Fish-Eye Captures

Fatma Hawary

INRIA, France

Equirectangular projection is commonly used to map 360° captures into a planar representation so that existing processing methods can be applied directly to such content. However, this format introduces stitching distortions that can impact the efficiency of further processing such as camera pose estimation, 3D point localization, and depth estimation. Even when some algorithms, mainly feature descriptors, remap the projected images onto a sphere, significant radial distortions remain in the processed data. In this paper, we propose to adapt the spherical model to the geometry of the 360° fish-eye camera and avoid the stitching process altogether. We consider the angular coordinates of feature points on the sphere for evaluation, and assess the precision of operations such as camera rotation angle estimation and 3D point depth calculation on spherical camera images. Experimental results show that the proposed fish-eye-adapted sphere mapping provides more stable angle estimation and 3D point localization than mapping based on projected and stitched content.
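
To illustrate the general idea of mapping fish-eye pixels directly onto the unit sphere without equirectangular stitching, here is a minimal sketch assuming an ideal equidistant fish-eye model (r = f·θ); the paper's actual camera model and calibration parameters are not reproduced here.

```python
# Minimal sketch: fish-eye pixel -> spherical angular coordinates.
# Assumes an ideal equidistant projection (r = f * theta), which is a
# common fish-eye model but only an assumption about the paper's setup.
import numpy as np

def fisheye_pixel_to_sphere(u, v, cx, cy, f):
    """Map pixel (u, v) to angular coordinates (theta, phi) and a unit
    3D direction, with (cx, cy) the principal point and f the focal
    length in pixels."""
    dx, dy = u - cx, v - cy
    r = np.hypot(dx, dy)
    theta = r / f                  # angle from the optical axis
    phi = np.arctan2(dy, dx)       # azimuth around the axis
    direction = np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])
    return theta, phi, direction
```

Feature points detected on the raw fish-eye image can then be compared via their angular coordinates, e.g. for rotation estimation between matched directions, without ever producing a stitched planar image.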

Hawary-SPHERE MAPPING FOR FEATURE EXTRACTION FROM 360° FISH-EYE CAPTURES-213.pdf


5:55pm - 6:10pm

Multi-Plane Image Video Compression

Scott Janus, Jill Boyce, Sumit Bhatia, Jason Tanner, Atul Divekar, Penne Lee

Intel, United States of America

Multi-plane images (MPI) are a new approach to storing volumetric content. An MPI represents a 3D scene within a view frustum, typically with 32 planes of texture and transparency information per camera. MPI literature to date has focused on still images, but applying MPI to video will require substantial compression to be viable for real-world productions. In this paper, we describe several techniques for compressing MPI video sequences by reducing the pixel rate while maintaining acceptable visual quality. We focus on traditional video codecs such as HEVC: while a new codec tailored specifically to MPI would likely achieve very good results, no devices exist today that support such a hypothetical codec, whereas hundreds of millions of real-time HEVC decoders are already present in laptops and TVs.
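
As a rough illustration of feeding MPI planes to a conventional codec, the sketch below culls near-transparent planes (one plausible way to reduce pixel rate) and tiles the rest into texture and alpha atlases; the culling rule and atlas layout are assumptions, not the paper's actual techniques.

```python
# Minimal sketch of preparing MPI frames for a conventional codec such
# as HEVC. Plane culling by transparency and the grid atlas layout are
# illustrative assumptions.
import numpy as np

def pack_mpi_atlas(planes, alpha_thresh=0.01, cols=8):
    """planes: (P, H, W, 4) RGBA MPI planes for one camera and frame.
    Drops near-empty planes, then tiles the rest into one RGB atlas and
    one single-channel alpha atlas, ready for a standard encoder."""
    keep = [p for p in planes if p[..., 3].max() > alpha_thresh]
    rows = -(-len(keep) // cols)            # ceil division
    H, W = planes.shape[1:3]
    rgb = np.zeros((rows * H, cols * W, 3), planes.dtype)
    a = np.zeros((rows * H, cols * W), planes.dtype)
    for i, p in enumerate(keep):
        r, c = divmod(i, cols)
        rgb[r*H:(r+1)*H, c*W:(c+1)*W] = p[..., :3]
        a[r*H:(r+1)*H, c*W:(c+1)*W] = p[..., 3]
    return rgb, a, len(keep)                # encode rgb/a atlases with HEVC
```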

Janus-Multi-Plane Image Video Compression-271.pdf


6:10pm - 6:25pm

Viewport Margins for 360-Degree Immersive Video

Igor D.D. Curcio, Saba Ahsan

Nokia Technologies, Finland

360-degree video delivery is increasingly widespread, and new use cases are constantly emerging, making it a promising video technology for Extended Reality applications. Viewport-Dependent Delivery (VDD) is an established technique for saving network bit rate when transmitting omnidirectional video. One of the hardest challenges in VDD of 360-degree video is ensuring that the video quality in the user's viewport is always the highest possible, independent of the speed and span of the user's head motion. This paper introduces the concept of viewport margins, which can be understood as an extra high-quality spatial safety area around the user's viewport. Viewport margins provide a better user experience by reducing the Motion-to-High-Quality Delay and the percentage of low-quality viewport seen by the user. We provide simulation results that show the advantage of using viewport margins for real-time, low-delay VDD of 360-degree video.
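
To make the margin idea concrete, here is a minimal sketch of tile selection for a viewport grown by a margin on each side, assuming a simple equirectangular tile grid; the tile size, margin value, and coarse overlap test are illustrative and not taken from the paper.

```python
# Minimal sketch: pick the tiles to stream at high quality for a
# viewport extended by a margin. Tile grid, margin, and the overlap
# test are illustrative assumptions.

def tiles_for_viewport(yaw, pitch, fov_h=90.0, fov_v=90.0,
                       margin=10.0, tile_deg=30.0):
    """Return (row, col) tiles of an equirectangular tiling that
    intersect the viewport grown by `margin` degrees per side."""
    cols, rows = int(360 // tile_deg), int(180 // tile_deg)
    half_h, half_v = fov_h / 2 + margin, fov_v / 2 + margin
    selected = set()
    for r in range(rows):
        for c in range(cols):
            # Tile center (yaw in [-180,180), pitch in [-90,90)).
            t_yaw = -180 + (c + 0.5) * tile_deg
            t_pitch = -90 + (r + 0.5) * tile_deg
            dyaw = (t_yaw - yaw + 180) % 360 - 180   # wrap-around distance
            dpitch = t_pitch - pitch
            if (abs(dyaw) <= half_h + tile_deg / 2 and
                    abs(dpitch) <= half_v + tile_deg / 2):
                selected.add((r, c))
    return selected   # stream these at high quality, the rest at low
```

A larger margin keeps fast head motion inside the high-quality region longer (lower Motion-to-High-Quality Delay) at the cost of more transmitted pixels, which is the trade-off the paper's simulations explore.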

Curcio-Viewport Margins for 360-Degree Immersive Video-256.pdf