Conference Agenda

PCC1-SS: Recent Advances in Point Cloud Coding 1 (V-PCC)
Monday, 21/Sept/2020:
5:25pm - 6:25pm

Session Chair: Sebastian Schwarz
Session Chair: Ioan Tabus
Location: Virtual platform

5:25pm - 5:40pm

Skeleton-based motion estimation for Point Cloud Compression

Chao CAO, Christian TULVAN, Marius PREDA, Titus ZAHARIA

Télécom SudParis, France

With the rapid development of point cloud acquisition technologies, high-quality human-shape point clouds are increasingly used in VR/AR applications and, more generally, in 3D graphics. To achieve near-realistic quality, such content usually contains an extremely high number of points (over 0.5 million points per 3D object per frame) and associated attributes (such as color). For this reason, efficient, dedicated 3D Point Cloud Compression (3DPCC) methods become mandatory. This requirement is even stronger for dynamic content, where the coordinates and attributes of the 3D points evolve over time. In this paper, we propose a novel skeleton-based 3DPCC approach dedicated to the specific case of dynamic point clouds representing humanoid avatars. The method relies on multi-view 2D human pose estimation of 3D dynamic point clouds. Using the DensePose neural network, we first extract the body parts from projected 2D images. The resulting 2D segmentation information is back-projected and aggregated into 3D space, making it possible to partition the 3D point cloud into a set of 3D body parts. For each part, a 3D affine transform is estimated between every two consecutive frames and used for 3D motion compensation. The proposed approach has been integrated into the Video-based Point Cloud Compression (V-PCC) test model of MPEG. Experimental results show that, in the particular case of body motion with small amplitudes, the proposed method outperforms the V-PCC test model under the lossy inter-coding condition by up to 69% in bitrate reduction at low bit rates. Moreover, the proposed framework has the potential to support features such as regions of interest and levels of detail.

CAO-Skeleton-based motion estimation for Point Cloud Compression-167.pdf
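The per-part motion model in the abstract above can be sketched as a least-squares fit of a 3D affine transform between corresponding points of a body part in two consecutive frames. This is an illustrative sketch, not the authors' implementation: point correspondences are assumed to be given, and all function names are hypothetical.

```python
import numpy as np

def estimate_affine_3d(src, dst):
    """Least-squares fit of a 3D affine transform (A, t) mapping src -> dst.

    src, dst: (N, 3) arrays of corresponding points of one body part in
    two consecutive frames (establishing the correspondences is a
    separate problem, not addressed here).
    """
    # Augment with a ones column and solve [src | 1] @ M = dst for M (4x3).
    ones = np.ones((src.shape[0], 1))
    src_h = np.hstack([src, ones])
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    A, t = M[:3].T, M[3]  # 3x3 linear part, 3-vector translation
    return A, t

def compensate(points, A, t):
    """Motion-compensated prediction of a body part's next-frame positions."""
    return points @ A.T + t
```

When the motion of a part is exactly affine, the prediction residual vanishes; in practice the encoder would transmit (A, t) per part and code only the residual.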

5:40pm - 5:55pm

Surface Lightfield Support in Video-based Point Cloud Coding

Deepa Naik, Sebastian Schwarz, Vinod Kumar Malamal Vadakital, Kimmo Roimela

Nokia, Finland

A surface light field (SLF) is a mapping of a set of color vectors to a set of ray vectors that originate at a point on a surface. It enables rendering photo-realistic viewpoints in extended reality applications. However, the amount of data required to represent an SLF is significantly larger, so storing and distributing SLFs requires an efficient compressed representation. The Moving Picture Experts Group (MPEG) has an ongoing standardization activity for the compression of point clouds. Until recently, this activity targeted the compression of a single texture, but it is now investigating view-dependent textures. In this paper, we propose methods to optimize the coding of view-dependent color without compromising visual quality. Our results show that the optimizations presented in this paper reduce the coded HEVC bit rate by 36% for the all-intra configuration and 48% for the random-access configuration, compared to coding all textures independently.

Naik-Surface Lightfield Support in Video-based Point Cloud Coding-184.pdf

5:55pm - 6:10pm

Mesh Coding Extensions to MPEG-I V-PCC

Esmaeil Faramarzi, Rajan Joshi, Madhukar Budagavi

Samsung, United States of America

Dynamic point clouds and meshes are used in a wide variety of applications such as gaming, visualization, medicine, and, more recently, AR/VR/MR. This paper presents two extensions of the MPEG-I Video-based Point Cloud Compression (V-PCC) standard to support mesh coding. The extensions are based on the Edgebreaker and TFAN mesh connectivity coding algorithms, as implemented in the Google Draco and MPEG SC3DMC mesh coding software packages, respectively. Lossless results for the proposed frameworks on top of version 8.0 of the MPEG-I V-PCC test model (TMC2) are presented and compared with Draco for dense meshes.

Faramarzi-Mesh Coding Extensions to MPEG-I V-PCC-244.pdf

6:10pm - 6:25pm

V-PCC Component Synchronization for Point Cloud Reconstruction

Danillo Graziosi1, Ali Tabatabai1, Alexandre Zaghetto1, Vladyslav Zakharchenko2

1San Jose Research Lab, Sony Corporation of America; 2Standards and Industry department, Futurewei Technologies Inc.

For a V-PCC system to reconstruct a single instance of the point cloud, one V-PCC unit must be transferred to the 3D point cloud reconstruction module. However, all the V-PCC components, i.e., occupancy map, geometry, atlas, and attribute, must be temporally aligned. This could, in principle, pose a challenge, since the temporal structures of the decoded sub-bitstreams are not coherent across V-PCC sub-bitstreams. In this paper, we propose an output delay adjustment mechanism for the decoded V-PCC sub-bitstreams that provides synchronized V-PCC component input to the point cloud reconstruction module.

Graziosi-V-PCC Component Synchronization for Point Cloud Reconstruction-287.pdf
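The alignment requirement described in the abstract above can be illustrated with a minimal buffering sketch: each decoded component frame is held until all four components for the same frame index have arrived, and only then is a complete set released to reconstruction. This is a hypothetical illustration, not the proposed mechanism; an actual V-PCC player derives alignment and output delays from timing information in the bitstream rather than a bare frame index.

```python
from collections import defaultdict

# The four V-PCC components that must be temporally aligned.
COMPONENTS = ("atlas", "occupancy", "geometry", "attribute")

class SyncBuffer:
    """Hold decoded V-PCC component frames until a complete,
    temporally aligned set is available for reconstruction."""

    def __init__(self):
        self._pending = defaultdict(dict)  # frame_idx -> {component: data}

    def push(self, frame_idx, component, data):
        """Accept one decoded component frame.

        Returns the full dict of aligned components once all four for
        this frame index are present, otherwise None (frame buffered).
        """
        self._pending[frame_idx][component] = data
        if len(self._pending[frame_idx]) == len(COMPONENTS):
            return self._pending.pop(frame_idx)
        return None
```

Because the sub-bitstreams are decoded independently, components may arrive out of order and interleaved across frames; the buffer releases a frame only when its last component arrives.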