IVP2-O: Image and Video Processing 2
9:40am - 9:55am
Controlled Feature Adjustment for Image Processing and Synthesis
Consejo Superior de Investigaciones Científicas (CSIC), Spain
Feature adjustment, understood as the set of techniques aimed at modifying at will the global features of given signals, is of cardinal importance for several signal processing applications, such as enhancement, restoration, style transfer, and synthesis. Despite this, it has not yet been approached from a general, theory-grounded perspective. This work proposes a new conceptual and practical methodology that we term Controlled Feature Adjustment (CFA). Given a set of parametric global features (scalar functions of discrete signals), CFA provides methods for (1) constructing a related set of deterministically decoupled features, and (2) adjusting these new features in a controlled way, i.e., each one independently of the others. We illustrate the application of CFA by devising a spectrally based, hierarchically decoupled feature set and applying it to obtain different types of image synthesis that are not achievable using traditional (coupled) feature sets.
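The notion of deterministically decoupled global features can be illustrated with a familiar, elementary pair that is not the paper's CFA construction: the sample mean and the standard deviation of the centered signal. Because the standard deviation is invariant to shifts of the mean, the two can be adjusted independently. A minimal sketch, assuming only numpy:

```python
import numpy as np

def adjust_decoupled(x, new_mean, new_std):
    """Set mean and standard deviation independently: the std of
    the centered signal does not depend on the mean, so these two
    global features are decoupled and can be adjusted in any order."""
    centered = x - x.mean()
    scaled = centered / x.std() * new_std  # adjust spread only
    return scaled + new_mean               # adjust location only

x = np.array([1.0, 2.0, 3.0, 4.0])
y = adjust_decoupled(x, new_mean=10.0, new_std=2.0)
print(y.mean(), y.std())  # 10.0 2.0
```

Adjusting the mean afterwards leaves the standard deviation untouched, which is the independence property the abstract generalizes to larger, hierarchically organized feature sets.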
9:55am - 10:10am
⭐ This paper has been nominated for the best paper award.
Self-supervised Light Field View Synthesis Using Cycle Consistency
Trinity College Dublin, Ireland
High angular resolution is advantageous for practical applications of light fields.
In order to enhance the angular resolution of light fields, view synthesis methods can be utilized to generate dense intermediate views from sparse light field input.
Most successful view synthesis methods are learning-based approaches that require a large amount of training data paired with ground truth. However, collecting such large datasets for light fields is challenging compared to natural images or videos. To tackle this problem, we propose a self-supervised light field view synthesis framework with cycle consistency. The proposed method aims to transfer prior knowledge learned from high-quality natural video datasets to the light field view synthesis task, which reduces the need for labeled light field data. A cycle consistency constraint is used to build a bidirectional mapping, enforcing the generated views to be consistent with the input views. Derived from this key concept, two loss functions, a cycle loss and a reconstruction loss, are used to fine-tune the pre-trained model of a state-of-the-art video interpolation method. The proposed method is evaluated on various datasets to validate its robustness, and the results show that it not only achieves competitive performance compared to supervised fine-tuning, but also outperforms state-of-the-art light field view synthesis methods, especially when generating multiple intermediate views. Moreover, our generic light field view synthesis framework can be applied to any pre-trained advanced video interpolation model.
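The cycle consistency idea — views synthesized from the sparse input should, when mapped back, reproduce a known input view — can be sketched as follows. This is a hedged toy illustration, not the paper's network: `interpolate` is a hypothetical stand-in (simple averaging) for the pre-trained video interpolation model.

```python
import numpy as np

def interpolate(a, b):
    # Stand-in for a pre-trained video interpolation network that
    # predicts the view midway between a and b; averaging is used
    # here purely for illustration.
    return 0.5 * (a + b)

def cycle_loss(v0, v2, v4):
    """Self-supervised cycle loss: synthesize the in-between views
    from known views, then interpolate the generated views back and
    compare against the known middle view v2 (no ground truth needed
    for v1 and v3)."""
    v1 = interpolate(v0, v2)      # generated intermediate views
    v3 = interpolate(v2, v4)
    v2_rec = interpolate(v1, v3)  # cycle back to a known input view
    return float(np.mean((v2_rec - v2) ** 2))

views = [np.full((4, 4), t) for t in (0.0, 1.0, 2.0)]
print(cycle_loss(*views))  # 0.0 for this linear toy example
```

Because the supervision signal comes only from the input views themselves, a real model can be fine-tuned this way on unlabeled light field data.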
10:10am - 10:25am
High Frame-Rate Virtual View Synthesis Based on Low Frame-Rate Input
Poznan University of Technology, Poland
In this paper, we investigated methods of obtaining high-resolution, high frame-rate virtual views from low frame-rate cameras for use in high-performance multiview systems. We demonstrated how to set up synchronization for multiview acquisition systems to record the required data, and then how to process that data to create virtual views at a higher frame rate while preserving the high resolution of the views. We analyzed various ways to combine temporal frame interpolation with an alternative side-view synthesis technique, which allows us to create the required high frame-rate video of a virtual viewpoint. The results demonstrate that the proposed methods are capable of delivering the expected high-quality, high-resolution, and high frame-rate virtual views.
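The combination the abstract analyzes can be sketched as two possible orderings of the same pair of operations. Both `temporal_interp` and `view_synth` below are hypothetical placeholders (plain averaging), not the paper's actual interpolation or synthesis algorithms; the sketch only shows the pipeline structure.

```python
import numpy as np

def temporal_interp(f0, f1):
    # Placeholder for frame-rate up-conversion between two time frames.
    return 0.5 * (f0 + f1)

def view_synth(left, right):
    # Placeholder for side-view synthesis from two neighbouring cameras.
    return 0.5 * (left + right)

# Two orderings for producing one virtual frame between time steps
# 0 and 1, from left (L) and right (R) camera frames:
# (a) synthesize the virtual view at each time, then interpolate in time;
# (b) interpolate each camera in time, then synthesize the virtual view.
def virtual_frame_a(L0, L1, R0, R1):
    return temporal_interp(view_synth(L0, R0), view_synth(L1, R1))

def virtual_frame_b(L0, L1, R0, R1):
    return view_synth(temporal_interp(L0, L1), temporal_interp(R0, R1))

L0, L1, R0, R1 = (np.full((2, 2), t) for t in (0.0, 1.0, 2.0, 3.0))
fa = virtual_frame_a(L0, L1, R0, R1)
fb = virtual_frame_b(L0, L1, R0, R1)
```

With these linear placeholders the two orderings coincide; with real interpolation and synthesis algorithms they generally do not, which is why the paper compares the ways of combining them.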
10:25am - 10:40am
An occlusion compensation model for improving the reconstruction quality of light field
Guangxi Normal University, People's Republic of China
Occlusion lack compensation (OLC) is a data acquisition and novel-view rendering strategy for light field rendering (LFR) that optimizes multiplexing gain. While the achievable OLC is much higher than previously thought possible, the improvement comes at the cost of requiring more scene information. Learning and training methods can capture this more detailed scene information, including geometric, texture, and depth information. In this paper, we develop an occlusion compensation (OCC) model based on a restricted Boltzmann machine (RBM) to compensate for the scene information missing due to occlusion. We show that occlusion causes a loss of captured scene information, which in turn degrades view rendering quality. The OCC model can estimate and compensate for the missing information at occlusion edges through learning. We present experimental results to demonstrate the performance of the OCC model with analog training, verify our theoretical analysis, and extend our conclusions on the optimal rendering quality of light fields.
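The core RBM mechanism for filling in occluded scene information can be sketched as clamped Gibbs sampling: observed entries are held fixed while hidden and visible units are resampled, so the model's learned statistics fill the occluded entries. This is a minimal toy sketch with untrained random weights, not the paper's trained OCC model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbm_fill(v, mask, W, b, c, steps=10):
    """Estimate the missing entries of v (where mask is False) with
    an RBM: repeatedly sample the hidden units given the visibles,
    resample the visibles, and keep observed entries clamped."""
    v = v.copy()
    for _ in range(steps):
        h = (sigmoid(v @ W + c) > rng.random(c.shape)).astype(float)
        v_new = sigmoid(h @ W.T + b)     # visible probabilities given hidden
        v = np.where(mask, v, v_new)     # clamp the observed entries
    return v

n_vis, n_hid = 16, 8
W = rng.normal(0.0, 0.1, (n_vis, n_hid))  # untrained weights, illustration only
b, c = np.zeros(n_vis), np.zeros(n_hid)
v = rng.random(n_vis)                     # toy "view" with values in [0, 1)
mask = rng.random(n_vis) > 0.3            # True = observed, False = occluded
filled = rbm_fill(v, mask, W, b, c)
```

In a trained model the weights would encode the scene statistics learned from data, so the resampled visibles at occlusion edges approximate the missing information rather than noise.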