IVP1-O: Image and Video Processing 1
First presentation: https://mmsp-virtual.org/presentation/oral/motion-jpeg-decoding-iterative-thresholding-and-motioncompensated-de%EF%AC%82ickering
Second presentation: https://mmsp-virtual.org/presentation/oral/single-depth-map-superresolution-joint-nonlocal-and-local-modeling
Third presentation: https://mmsp-virtual.org/presentation/oral/noisebreaker-gradual-image-denoising-guided-noise-analysis
Fourth presentation: https://mmsp-virtual.org/presentation/oral/key-point-agnostic-frequencyselective-meshtogrid-image-resampling-using-spectral
8:30am - 8:45am
Motion JPEG Decoding via Iterative Thresholding and Motion-Compensated Deflickering
¹ITMO University, Russian Federation; ²Shenzhen University, P.R. China
This paper studies the problem of decoding video sequences compressed by Motion JPEG (M-JPEG) at the best possible perceived video quality. We consider decoding of M-JPEG video as signal recovery from incomplete measurements, as known from compressive sensing. We take all quantized non-zero Discrete Cosine Transform (DCT) coefficients as measurements and the remaining zero coefficients as data that should be recovered. The output video is reconstructed via an iterative thresholding algorithm in which Video Block Matching and 4-D filtering (VBM4D) is used as the thresholding operator. To reduce non-linearities in the measurements caused by the quantization in JPEG, we propose to apply spatio-temporal pre-filtering before measurement calculation and recovery. Since temporal inconsistencies of the residual coding artifacts lead to strong flickering in the recovered video, we also propose to apply a motion-compensated deflickering filter as a post-filter. Experimental results show that the proposed approach provides a 0.44–0.51 dB average improvement in Peak Signal-to-Noise Ratio (PSNR), as well as a lower flickering level, compared to the state-of-the-art method based on Coefficient Graph Laplacians (COGL). We have also conducted a subjective comparison study, indicating that the proposed approach outperforms state-of-the-art methods in terms of subjective video quality.
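The recovery loop the abstract describes can be sketched as follows. This is a toy single-frame illustration, not the authors' implementation: a Gaussian smoother stands in for the VBM4D thresholding operator, and the spatio-temporal pre-filter and deflickering post-filter are omitted.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def iterative_thresholding_decode(frame, mask, n_iters=30):
    """Recover zeroed DCT coefficients by iterative thresholding.

    frame : decoded frame containing only the measured DCT coefficients
    mask  : boolean array, True where a DCT coefficient survived quantization
    """
    # Measurements: the coefficients JPEG kept (non-zero after quantization).
    measured = dctn(frame, norm='ortho')[mask]
    x = frame.astype(float).copy()
    for _ in range(n_iters):
        # Surrogate for the VBM4D thresholding operator.
        x = gaussian_filter(x, sigma=0.8)
        coeffs = dctn(x, norm='ortho')
        coeffs[mask] = measured  # re-impose the measured coefficients
        x = idctn(coeffs, norm='ortho')
    return x
```

The loop alternates a denoising step with a data-consistency projection, which is the basic structure of iterative-thresholding recovery in compressive sensing.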
8:45am - 9:00am
Single Depth Map Super-Resolution via Joint Non-Local and Local Modeling
¹University of Electronic Science and Technology of China, People's Republic of China; ²Sichuan University
Recently, consumer depth cameras have gained significant popularity due to their affordable cost. However, the resolution and quality of the depth maps generated by these cameras are limited, which affects their application performance. In this paper, we propose a novel framework for single depth map super-resolution that jointly exploits local and non-local constraints in the depth map. For the non-local constraint, we use group-based sparse representation to explore the non-local self-similarity of the depth map. For the local constraint, we first estimate gradient images in different directions of the desired high-resolution (HR) depth map, and then build a multi-directional gradient-guided regularizer from these estimated gradient images to characterize depth gradients with spatially varying orientations. Finally, the two complementary regularizers are cast into a unified optimization framework to obtain the desired HR image. Quantitative and qualitative evaluations against state-of-the-art methods demonstrate that the proposed method achieves superior depth super-resolution performance.
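The unified optimization can be illustrated on a toy example. The sketch below keeps only the local, multi-directional gradient-guidance term (the group-based sparse non-local regularizer is omitted for brevity), assumes the HR gradient estimates are already given, and uses average pooling as the degradation model; all of these are simplifying assumptions, not the paper's actual formulation.

```python
import numpy as np
from scipy.optimize import minimize

def depth_sr(y, g_h, g_v, lam=0.1, s=2):
    """Toy depth SR: data-fidelity term + gradient-guided regularizer.

    y        : low-resolution depth map
    g_h, g_v : assumed-given horizontal/vertical HR gradient estimates
    lam      : regularization weight
    s        : upscaling factor
    """
    H, W = y.shape[0] * s, y.shape[1] * s

    def energy(flat):
        x = flat.reshape(H, W)
        # Data term: average-pooled HR estimate must match the LR input.
        data = np.sum((x.reshape(H // s, s, W // s, s).mean(axis=(1, 3)) - y) ** 2)
        # Gradient-guidance term: HR gradients must match the estimates.
        guide = (np.sum((np.diff(x, axis=1) - g_h) ** 2)
                 + np.sum((np.diff(x, axis=0) - g_v) ** 2))
        return data + lam * guide

    # Initialize with nearest-neighbour upsampling, then minimize jointly.
    x0 = np.kron(y, np.ones((s, s))).ravel()
    res = minimize(energy, x0, method='L-BFGS-B')
    return res.x.reshape(H, W)
```

The two terms are minimized jointly in a single objective, mirroring the "unified optimization framework" the abstract describes.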
9:00am - 9:15am
NoiseBreaker: Gradual Image Denoising Guided by Noise Analysis
¹Univ. Rennes, INSA Rennes, IETR - UMR CNRS 6164, France; ²DGA-MI, Bruz, France
Fully supervised deep-learning-based denoisers are currently the best-performing image denoising solutions. However, they require clean reference images. When the target noise is complex, e.g. composed of an unknown mixture of primary noises with unknown intensities, fully supervised solutions are hindered by the difficulty of building a training set suited to the problem.
This paper proposes a gradual denoising strategy called NoiseBreaker that iteratively detects the dominating noise in an image and removes it using a tailored denoiser. The method is shown to keep up with state-of-the-art blind denoisers on mixture noises. Moreover, noise analysis is demonstrated to guide denoisers efficiently not only with respect to noise type, but also to noise intensity. The method provides insight into the nature of the encountered noise, and it makes it possible to extend an existing NoiseBreaker denoiser to support new noise profiles. This feature makes the method adaptable to varied denoising cases.
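The detect-then-denoise loop can be sketched as below. The noise classifier and the per-noise denoisers here are crude hand-written stand-ins (the paper uses a learned noise analysis and tailored learned denoisers); names such as `classify_noise` are illustrative only.

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter

def classify_noise(img):
    """Hypothetical heuristic stand-in for the learned noise analyser."""
    # Salt-and-pepper noise shows up as extreme-valued pixels.
    extremes = np.mean((img <= 0.02) | (img >= 0.98))
    if extremes > 0.05:
        return 'impulse'
    # High residual energy after smoothing suggests Gaussian-like noise.
    if np.std(img - gaussian_filter(img, 1)) > 0.02:
        return 'gaussian'
    return 'clean'

# One tailored denoiser per primary noise type.
DENOISERS = {
    'impulse': lambda x: median_filter(x, size=3),
    'gaussian': lambda x: gaussian_filter(x, sigma=1),
}

def noisebreaker(img, max_steps=4):
    """Gradually remove the dominating noise, one primary noise at a time."""
    for _ in range(max_steps):
        kind = classify_noise(img)
        if kind == 'clean':
            break
        img = DENOISERS[kind](img)
    return img
```

Each pass strips the currently dominating primary noise, so a mixture is peeled away gradually rather than attacked with a single blind model.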
9:15am - 9:30am
⭐ This paper has been nominated for the best paper award.
Key Point Agnostic Frequency-Selective Mesh-to-Grid Image Resampling using Spectral Weighting
Friedrich-Alexander University, Germany
Many applications in image processing require resampling of arbitrarily located samples onto regular grid positions. This is important in frame-rate up-conversion, super-resolution, and image warping, among others. A state-of-the-art high-quality model-based resampling technique is frequency-selective mesh-to-grid resampling, which requires pre-estimation of key points. In this paper, we propose a new key-point-agnostic frequency-selective mesh-to-grid resampling (AFSMR) that does not depend on pre-estimated key points. Hence, the number of data points that are included is reduced drastically and the run time decreases significantly. To compensate for the key points, a spectral weighting function is introduced that models the optical transfer function in order to favor low frequencies over high ones. Thereby, resampling artefacts like ringing are suppressed reliably and the resampling quality increases. On average, the new AFSMR is conceptually simpler and gains up to 1.2 dB in terms of PSNR compared to the original mesh-to-grid resampling while being approximately 14.5 times faster.
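The role of the spectral weighting can be illustrated with a toy greedy frequency-selective reconstruction: at each iteration, the DCT basis function that best explains the residual at the scattered sample positions is selected, with its selection gain scaled by a weight that decays with frequency. This is a minimal sketch, not the AFSMR implementation; the decay parameter `rho` and the greedy selection details are assumptions.

```python
import numpy as np

def spectral_weight(k, l, rho=0.7):
    # Isotropic decay favouring low frequencies (models an optical
    # transfer function); rho is an assumed decay parameter.
    return rho ** np.sqrt(k ** 2 + l ** 2)

def fsr_resample(coords, values, N, n_iters=50):
    """Toy frequency-selective reconstruction onto an N x N grid."""
    def basis(k, l, y, x):
        # DCT-II basis evaluated at arbitrary (possibly off-grid) positions.
        return (np.cos(np.pi * k * (2 * y + 1) / (2 * N))
                * np.cos(np.pi * l * (2 * x + 1) / (2 * N)))

    coef = np.zeros((N, N))
    residual = values.astype(float).copy()
    ys, xs = coords[:, 0], coords[:, 1]
    for _ in range(n_iters):
        best, best_gain = None, 0.0
        for k in range(N):
            for l in range(N):
                b = basis(k, l, ys, xs)
                nb = b @ b
                if nb == 0:
                    continue
                c = (b @ residual) / nb
                # Spectral weighting biases the greedy choice to low frequencies.
                gain = spectral_weight(k, l) * c ** 2 * nb
                if gain > best_gain:
                    best_gain, best, best_c, best_b = gain, (k, l), c, b
        if best is None:
            break
        coef[best] += best_c
        residual -= best_c * best_b
    # Evaluate the selected model on the regular output grid.
    gy, gx = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
    out = np.zeros((N, N))
    for k in range(N):
        for l in range(N):
            if coef[k, l] != 0:
                out += coef[k, l] * basis(k, l, gy, gx)
    return out
```

Because the weight shrinks the gain of high-frequency basis functions, the greedy selection prefers smooth components, which is how ringing-type artefacts get suppressed.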
Conference: IEEE MMSP 2020