Conference Agenda

Overview and details of the sessions of this conference. Please register as a participant (registration is free) and log in to access downloads in the detailed view. Select a date or location to show only the sessions on that day or at that location, or select a single session for a detailed view.

Please note that all times are shown in the time zone of the conference (EEST).

Session Overview
IVP1-O: Image and Video Processing 1
Tuesday, 22/Sept/2020:
8:30am - 9:30am

Session Chair: Federica Battisti
Location: Virtual platform

8:30am - 8:45am

Motion JPEG Decoding via Iterative Thresholding and Motion-Compensated Deflickering

Evgeny Belyaev1, Linlin Bie2, Jari Korhonen2

1ITMO University, Russian Federation; 2Shenzhen University, P.R. China

This paper studies the problem of decoding video sequences compressed by Motion JPEG (M-JPEG) at the best possible perceived video quality. We consider decoding of M-JPEG video as signal recovery from incomplete measurements, as known in compressive sensing. We take all quantized non-zero Discrete Cosine Transform (DCT) coefficients as measurements and the remaining zero coefficients as data that should be recovered. The output video is reconstructed via an iterative thresholding algorithm, where Video Block Matching and 4-D filtering (VBM4D) is used as the thresholding operator. To reduce non-linearities in the measurements caused by the quantization in JPEG, we propose to apply spatio-temporal pre-filtering before measurement calculation and recovery. Since temporal inconsistencies of the residual coding artifacts lead to strong flickering in the recovered video, we also propose to apply a motion-compensated deflickering filter as a post-filter. Experimental results show that the proposed approach provides a 0.44–0.51 dB average improvement in Peak Signal-to-Noise Ratio (PSNR), as well as a lower flickering level, compared to the state-of-the-art method based on Coefficient Graph Laplacians (COGL). We have also conducted a subjective comparison study, indicating that the proposed approach outperforms state-of-the-art methods in terms of subjective video quality.
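As an illustration of the kind of recovery loop the abstract describes, the sketch below alternates a denoising ("thresholding") step with a measurement-consistency projection on the quantized DCT coefficients. It is a heavily simplified, hypothetical sketch, not the paper's implementation: `denoise` stands in for VBM4D, the coefficients are a flat list, and a single quantizer step `q` is assumed.

```python
def project_to_measurements(coeffs, observed, q):
    """Clamp each recovered DCT coefficient back into the quantization
    interval of its observed value. Zero-valued observations are treated
    as missing measurements and left untouched (hypothetical sketch)."""
    out = []
    for c_rec, c_obs in zip(coeffs, observed):
        if c_obs == 0:  # unmeasured coefficient: keep the denoiser's estimate
            out.append(c_rec)
        else:           # measured coefficient: stay inside its quantizer bin
            lo, hi = c_obs - q / 2, c_obs + q / 2
            out.append(min(max(c_rec, lo), hi))
    return out

def iterative_thresholding(observed, q, denoise, n_iters=10):
    """Alternate a (stand-in) thresholding/denoising operator with the
    measurement-consistency projection; VBM4D plays the role of
    'denoise' in the paper."""
    x = list(observed)
    for _ in range(n_iters):
        x = denoise(x)
        x = project_to_measurements(x, observed, q)
    return x
```

In the actual method the projection and denoising act on full video volumes in the DCT domain; this sketch only shows the alternation structure.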

8:45am - 9:00am

Single depth map super-resolution via joint non-local and local modeling

Yingying Zhang1, Chao Ren2, Honggang Chen2, Ce Zhu1

1University of Electronic Science and Technology of China, P.R. China; 2Sichuan University, P.R. China

Recently, consumer depth cameras have gained significant popularity due to their affordable cost. However, the resolution and the quality of the depth maps generated by these cameras are limited, which affects their application performance. In this paper, we propose a novel framework for single depth map super-resolution that jointly exploits local and non-local constraints in the depth map. For the non-local constraint, we use group-based sparse representation to exploit the non-local self-similarity of the depth map. For the local constraint, we first estimate gradient images in different directions of the desired high-resolution (HR) depth map, and then build a multi-directional gradient-guided regularizer using these estimated gradient images to characterize depth gradients with spatially varying orientations. Finally, the two complementary regularizers are cast into a unified optimization framework to obtain the desired HR image. Quantitative and qualitative evaluations against state-of-the-art methods demonstrate that the proposed method achieves superior depth super-resolution performance.
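The unified framework described above can be read as a single objective of the following general form (a hedged reconstruction for illustration; the weights, the degradation operator, and the set of gradient directions are assumptions, not the paper's notation):

```latex
\min_{x}\; \|DHx - y\|_2^2
  + \lambda_{\mathrm{nl}}\,\Psi_{\mathrm{nl}}(x)
  + \lambda_{\mathrm{g}} \sum_{d} \|\nabla_d x - \hat{g}_d\|_2^2
```

where $y$ is the observed low-resolution depth map, $DH$ a blur-and-downsampling operator, $\Psi_{\mathrm{nl}}$ the group-based sparse-representation (non-local) term, and $\nabla_d x$, $\hat{g}_d$ the gradients of the HR estimate and their pre-estimated values along direction $d$.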

9:00am - 9:15am

NoiseBreaker: Gradual Image Denoising Guided by Noise Analysis

Florian Lemarchand1, Thomas Findeli1, Erwan Nogues1,2, Maxime Pelcat1

1Univ. Rennes, INSA Rennes, IETR - UMR CNRS 6164, France; 2DGA-MI, Bruz, France

Fully supervised deep-learning-based denoisers are currently the best-performing image denoising solutions. However, they require clean reference images. When the target noise is complex, e.g. composed of an unknown mixture of primary noises with unknown intensities, fully supervised solutions are hindered by the difficulty of building a training set suited to the problem.

This paper proposes a gradual denoising strategy called NoiseBreaker that iteratively detects the dominating noise in an image and removes it using a tailored denoiser. The method is shown to keep up with state-of-the-art blind denoisers on mixture noises. Moreover, noise analysis is demonstrated to guide denoisers efficiently not only on noise type, but also on noise intensity. The method provides insight into the nature of the encountered noise, and it makes it possible to extend an existing NoiseBreaker denoiser to support new noise profiles. This feature makes the method adaptive to varied denoising cases.
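The iterative detect-then-denoise loop sketched in the abstract could look roughly like this (a hypothetical sketch: `classify` and `denoisers` stand in for the paper's learned noise classifier and per-noise-type denoisers, which are not reproduced here):

```python
def noisebreaker(image, classify, denoisers, max_steps=5):
    """Gradual denoising sketch: repeatedly detect the dominating noise
    type and apply the matching tailored denoiser until the classifier
    reports the image as clean (or a step budget is exhausted)."""
    for _ in range(max_steps):
        noise_type = classify(image)
        if noise_type == "clean":
            break
        image = denoisers[noise_type](image)
    return image
```

The interesting part of the method lies in the classifier and the per-noise denoisers themselves; the loop only makes explicit that each pass strips the currently dominating noise component.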

9:15am - 9:30am
⭐ This paper has been nominated for the best paper award.

Key Point Agnostic Frequency-Selective Mesh-to-Grid Image Resampling using Spectral Weighting

Viktoria Heimann, Nils Genser, André Kaup

Friedrich-Alexander University, Germany

Many applications in image processing require resampling of arbitrarily located samples onto regular grid positions. This is important in frame-rate up-conversion, super-resolution, and image warping, among others. A state-of-the-art high-quality model-based resampling technique is frequency-selective mesh-to-grid resampling, which requires pre-estimation of key points. In this paper, we propose a new key point agnostic frequency-selective mesh-to-grid resampling (AFSMR) that does not depend on pre-estimated key points. Hence, the number of data points that are included is reduced drastically and the run time decreases significantly. To compensate for the key points, a spectral weighting function is introduced that models the optical transfer function in order to favor low frequencies over high ones. Thereby, resampling artifacts such as ringing are suppressed reliably and the resampling quality increases. On average, the new AFSMR is conceptually simpler and gains up to 1.2 dB in terms of PSNR compared to the original mesh-to-grid resampling while being approximately 14.5 times faster.
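To illustrate the idea of spectral weighting, the toy function below assigns basis-function weights that decay with spatial frequency, so low-frequency content is favored over high-frequency content. The Gaussian shape and the sigma parameter are assumptions chosen for illustration, not the optical-transfer-function model used in the paper:

```python
import math

def spectral_weight(fx, fy, sigma=0.5):
    """Isotropic low-pass weighting (a hypothetical stand-in for an
    optical-transfer-function model): weights are ~1 near zero
    frequency and decay for higher normalized frequencies (fx, fy),
    which damps ringing-prone high-frequency basis functions."""
    return math.exp(-(fx * fx + fy * fy) / (2.0 * sigma ** 2))
```

In a frequency-selective resampling loop, such a weight would multiply each candidate basis function's contribution before the best one is selected, biasing the reconstruction toward smooth components.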

Conference: IEEE MMSP 2020
Conference Software - ConfTool Pro 2.6.135+CC
© 2001 - 2020 by Dr. H. Weinreich, Hamburg, Germany