PCC1-SS: Recent Advances in Point Cloud Coding 1 (V-PCC)
5:25pm - 5:40pm
Skeleton-based motion estimation for Point Cloud Compression
Télécom SudParis, France
With the rapid development of point cloud acquisition technologies, high-quality human-shape point clouds are increasingly used in VR/AR applications and in 3D graphics in general. To achieve near-realistic quality, such content usually contains an extremely high number of points (over 0.5 million points per 3D object per frame) and associated attributes (such as color). For this reason, efficient, dedicated 3D Point Cloud Compression (3DPCC) methods become mandatory. This requirement is even stronger in the case of dynamic content, where the coordinates and attributes of the 3D points evolve over time. In this paper, we propose a novel skeleton-based 3DPCC approach, dedicated to the specific case of dynamic point clouds representing humanoid avatars. The method relies on a multi-view 2D human pose estimation of 3D dynamic point clouds. By using the DensePose neural network, we first extract the body parts from projected 2D images. The obtained 2D segmentation information is back-projected and aggregated into the 3D space. This procedure makes it possible to partition the 3D point cloud into a set of 3D body parts. For each part, a 3D affine transform is estimated between every two consecutive frames and used for 3D motion compensation. The proposed approach has been integrated into the Video-based Point Cloud Compression (V-PCC) test model of MPEG. Experimental results show that the proposed method, in the particular case of body motion with small amplitudes, outperforms the V-PCC test model in the lossy inter-coding condition by up to 69% in terms of bitrate reduction in low bit rate conditions. Meanwhile, the proposed framework holds the potential of supporting various features such as regions of interest and levels of detail.
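As an illustration of the per-part motion model described above, a 3D affine transform between corresponding points of one body part in two consecutive frames can be estimated by linear least squares. This is a minimal sketch under our own assumptions (the function name, NumPy, and the correspondence input are illustrative choices, not the paper's implementation):

```python
import numpy as np

def estimate_affine(src, dst):
    """Fit dst ≈ src @ A.T + t for corresponding (N, 3) point sets.

    src: points of one body part in frame k
    dst: the matched points in frame k+1
    Returns the 3x3 linear part A and the 3-vector translation t.
    """
    n = src.shape[0]
    # Homogeneous coordinates turn the affine fit into one lstsq solve.
    X = np.hstack([src, np.ones((n, 1))])          # (N, 4)
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)    # (4, 3)
    A = M[:3].T   # linear part
    t = M[3]      # translation
    return A, t
```

Given such an (A, t) per body part, motion compensation predicts the next frame's points as `src @ A.T + t`, and only the residual needs to be coded.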
5:40pm - 5:55pm
Surface Lightfield Support in Video-based Point Cloud Coding
Surface light-field (SLF) is a mapping of a set of color vectors to a set of ray vectors that originate at a point on a surface. It enables rendering photo-realistic viewpoints in extended reality applications. However, the amount of data required to represent an SLF is significantly larger than for a single texture. Therefore, storing and distributing SLFs requires an efficient compressed representation. The Moving Picture Experts Group (MPEG) has an ongoing standardization activity for the compression of point clouds. Until recently, this activity targeted compression of single-texture information, but it is now investigating view-dependent textures. In this paper, we propose methods to optimize the coding of view-dependent color without compromising visual quality. Our results show that the optimizations provided in this paper reduce the coded HEVC bit rate by 36% for the all-Intra configuration and 48% for the random-access configuration, compared to coding all textures independently.
5:55pm - 6:10pm
Mesh Coding Extensions to MPEG-I V-PCC
Samsung, United States of America
Dynamic point clouds and meshes are used in a wide variety of applications such as gaming, visualization, medicine, and, more recently, AR/VR/MR. This paper presents two extensions of the MPEG-I Video-based Point Cloud Compression (V-PCC) standard to support mesh coding. The extensions are based on the Edgebreaker and TFAN mesh connectivity coding algorithms, implemented in the Google Draco software and the MPEG SC3DMC software for mesh coding, respectively. Lossless results for the proposed frameworks on top of version 8.0 of the MPEG-I V-PCC test model (TMC2) are presented and compared with Draco for dense meshes.
6:10pm - 6:25pm
V-PCC Component Synchronization for Point Cloud Reconstruction
San Jose Research Lab, Sony Corporation of America; Standards and Industry Department, Futurewei Technologies Inc.
For a V-PCC system to reconstruct a single instance of the point cloud, one V-PCC unit must be transferred to the 3D point cloud reconstruction module. All the V-PCC components, i.e. occupancy map, geometry, atlas, and attribute, are however required to be temporally aligned. This could, in principle, pose a challenge, since the temporal structures of the decoded sub-bitstreams are not coherent across V-PCC sub-bitstreams. In this paper, we propose an output delay adjustment mechanism for the decoded V-PCC sub-bitstreams that provides synchronized V-PCC component inputs to the point cloud reconstruction module.
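The delay-adjustment idea can be illustrated with a toy sketch (our own simplification, not the mechanism proposed in the paper): if each component decoder has a known output delay, each decoded component is buffered by the difference to the slowest one, so that all components emit the same frame index simultaneously.

```python
def output_delay_adjustments(component_delays):
    """Compute extra buffering per V-PCC component (in frames).

    component_delays: dict mapping component name (occupancy map,
    geometry, atlas, attribute) to its decoder output delay.
    Returns the additional delay each component needs so that all
    components become aligned with the slowest one.
    """
    max_delay = max(component_delays.values())
    return {name: max_delay - d for name, d in component_delays.items()}
```

For example, if the attribute sub-bitstream decoder is the slowest, every other component is held back just long enough that a complete, temporally aligned set of components reaches the reconstruction module together.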