Conference Agenda

Session
PCC2-SS: Recent Advances in Point Cloud Coding 2 (G-PCC)
Time:
Thursday, 24/Sept/2020:
4:15pm - 5:15pm

Session Chair: Ioan Tabus
Session Chair: Sebastian Schwarz
Location: Virtual platform

Presentations
4:15pm - 4:30pm

Deep Learning-based Point Cloud Geometry Coding with Resolution Scalability

André Guarda1, Nuno Rodrigues2, Fernando Pereira1

1Instituto Superior Técnico, Instituto de Telecomunicações, Lisbon, Portugal; 2ESTG, Instituto Politécnico de Leiria, Instituto de Telecomunicações, Leiria, Portugal

Point clouds are a 3D visual representation format that has recently become fundamentally important for immersive and interactive multimedia applications. Given the very large number of points in practically relevant point clouds, and their increasing market demand, efficient point cloud coding has become a vital research topic. In addition, scalability is an important feature for point cloud coding, especially for real-time applications where fast and rate-efficient access to a decoded point cloud is important; however, this issue is still rather unexplored in the literature. In this context, this paper proposes a novel deep learning-based point cloud geometry coding solution with resolution scalability via interlaced sub-sampling. As additional layers are decoded, the number of points in the reconstructed point cloud increases, and so does the overall quality. Experimental results show that the proposed scalable point cloud geometry coding solution outperforms the recent MPEG Geometry-based Point Cloud Compression (G-PCC) standard, which is much less scalable.
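As an illustration of the kind of interlaced sub-sampling the abstract refers to (a minimal sketch, not the authors' exact scheme), the following Python/NumPy snippet splits a voxelized point cloud into eight interlaced layers by coordinate parity; decoding more layers adds more points and raises the reconstruction resolution and quality.

    import numpy as np

    def interlace_layers(points):
        """Split integer voxel coordinates into 8 interlaced sub-sets by the
        parity of (x, y, z). Illustrative only; the paper's actual
        sub-sampling and layer ordering may differ."""
        points = np.asarray(points, dtype=np.int64)
        parity = (points[:, 0] % 2) * 4 + (points[:, 1] % 2) * 2 + (points[:, 2] % 2)
        return [points[parity == k] for k in range(8)]

    # Decoding more layers yields a denser, higher-quality reconstruction.
    layers = interlace_layers(np.random.randint(0, 64, size=(1000, 3)))
    coarse = np.concatenate(layers[:2])   # partial reconstruction (2 of 8 layers)
    full = np.concatenate(layers)         # full-resolution reconstruction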

Guarda-Deep Learning-based Point Cloud Geometry Coding with Resolution Scalability-112.pdf


4:30pm - 4:45pm
⭐ This paper has been nominated for the best paper award.

Improved Deep Point Cloud Geometry Compression

Maurice Quach, Giuseppe Valenzise, Frederic Dufaux

Université Paris-Saclay, CNRS, CentraleSupélec, Laboratoire des signaux et systèmes, 91190 Gif-sur-Yvette, France

Point clouds have been recognized as a crucial data structure for 3D content and are essential in a number of applications such as virtual and mixed reality, autonomous driving, and cultural heritage. In this paper, we propose a set of contributions to improve deep point cloud compression, namely: using a scale hyperprior model for entropy coding; employing deeper transforms; a different balancing weight in the focal loss; optimal thresholding for decoding; and sequential model training. In addition, we present an extensive ablation study on the impact of each of these factors, in order to provide a better understanding of why they improve rate-distortion (RD) performance. An optimal combination of the proposed improvements achieves BD-PSNR gains over G-PCC trisoup and octree of 5.51 (6.50) dB and 6.83 (5.85) dB, respectively, when using the point-to-point (point-to-plane) metric.
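For context on the "balancing weight in the focal loss" mentioned above, the sketch below shows the standard binary focal loss applied to predicted voxel-occupancy probabilities; the alpha and gamma values are placeholders for illustration, not the settings used in the paper.

    import numpy as np

    def focal_loss(p, y, alpha=0.7, gamma=2.0):
        """Binary focal loss on predicted occupancy probabilities p and
        ground-truth occupancies y in {0, 1}. alpha is the class-balancing
        weight the abstract refers to tuning; values here are assumptions."""
        p = np.clip(p, 1e-7, 1 - 1e-7)
        pos = -alpha * (1 - p) ** gamma * np.log(p)        # occupied voxels
        neg = -(1 - alpha) * p ** gamma * np.log(1 - p)    # empty voxels
        return np.mean(y * pos + (1 - y) * neg)

Because occupied voxels are far rarer than empty ones, the balancing weight and the threshold applied to p at decoding time both have a direct impact on RD performance, which is what the ablation study quantifies.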

Quach-Improved Deep Point Cloud Geometry Compression-255.pdf


4:45pm - 5:00pm
⭐ This paper has been nominated for the best paper award.

Saliency Maps for Point Clouds

Victor Fabre Figueiredo1, Gustavo Luiz Sandri2, Ricardo Lopes de Queiroz1, Philip A. Chou3

1University of Brasilia, Brazil; 2Federal Institute of Brasilia, Brazil; 3Google Inc.

Algorithms for creating saliency maps are well established for images, whereas there is little literature on such methods for point clouds. We use orthographic projections onto 2D planes, to which well-established saliency detection algorithms are applied; the resulting 2D saliency values are projected back to the 3D voxels, and the contributions from the many projections are combined to generate a 3D saliency map. Simple compression tests were carried out using soft region-of-interest (ROI) maps.
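A minimal sketch of lifting 2D saliency back to 3D voxels, assuming integer voxel coordinates and one 2D saliency map per orthographic view sized to the voxel grid; the averaging used here is a simplification, not the aggregation described in the paper.

    import numpy as np

    def lift_saliency(points, saliency_2d_per_view, axes=(0, 1, 2)):
        """For each orthographic view (dropping one axis), look up each
        voxel's projected pixel in that view's 2D saliency map and average
        the values over views to form a per-voxel 3D saliency score."""
        acc = np.zeros(len(points))
        for axis, sal2d in zip(axes, saliency_2d_per_view):
            keep = [a for a in range(3) if a != axis]    # projection plane
            uv = points[:, keep]                         # projected pixel coords
            acc += sal2d[uv[:, 0], uv[:, 1]]
        return acc / len(axes)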

Figueiredo-Saliency Maps for Point Clouds-258.pdf


5:00pm - 5:15pm
⭐ This paper has been nominated for the best paper award.

Successive Refinement of Bounding Volumes for Point Cloud Coding

Ioan Tabus1, Emre Kaya1, Sebastian Schwarz2

1Tampere University, Finland; 2Nokia Technologies

The paper proposes a new lossy way of encoding the geometry of point clouds. The proposed scheme first reconstructs the geometry from only the two depth maps associated with a single projection direction, and then refines this reconstruction progressively using suitably defined anchor points. The reconstruction from the two depth images relies on several analysis and encoding primitives, some of which are optional. The resulting bitstream is embedded and can be truncated at various levels of reconstruction of the bounding volume.

The tools for encoding the required entities are extremely simple and can be combined flexibly. The scheme can also be combined with G-PCC coding to reconstruct sparse point clouds losslessly. Experiments show improved rate-distortion performance when the proposed method is combined with the G-PCC codec, compared to the G-PCC codec alone.
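To illustrate the starting point the abstract describes, the sketch below reconstructs a bounding volume from two depth maps taken along one projection direction; it is an assumption-laden toy version, and the progressive refinement with anchor points is not shown.

    import numpy as np

    def volume_between_depth_maps(front, back):
        """Mark as occupied every voxel (x, y, z) lying between the near
        and far depth maps along the projection axis, i.e.
        front[x, y] <= z <= back[x, y]. Illustrative only."""
        z = np.arange(int(back.max()) + 1)[None, None, :]          # (1, 1, Z)
        occ = (z >= front[..., None]) & (z <= back[..., None])     # (H, W, Z)
        return np.argwhere(occ)                                    # occupied voxels

Truncating the bitstream earlier simply leaves the reconstruction closer to this coarse between-depth-maps volume, which is what makes the representation embedded.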

Tabus-Successive Refinement of Bounding Volumes for Point Cloud Coding-274.pdf