Conference Agenda

Session
IVP3-P: Image and Video Processing 3
Time: Thursday, 24/Sept/2020, 10:50am - 11:15am

Session Chair: Azeddine Beghdadi
Location: Virtual platform

Presentations
10:50am - 10:55am

Real-Time Frequency Selective Reconstruction through Register-Based Argmax Calculation

Andy Regensky, Simon Grosche, Jürgen Seiler, André Kaup

Friedrich-Alexander University Erlangen-Nürnberg (FAU), Germany

Frequency Selective Reconstruction (FSR) is a state-of-the-art algorithm for solving diverse image reconstruction tasks in which a subset of the pixel values in the image is missing. However, it entails a high computational complexity due to its iterative, blockwise procedure for reconstructing the missing pixel values. Although the complexity of FSR can be decreased considerably by performing its computations in the frequency domain, the reconstruction still takes several seconds up to several minutes depending on the parameterization. FSR nevertheless lends itself to massive parallelization, which can greatly improve its reconstruction time. In this paper, we introduce a novel, highly parallelized formulation of FSR adapted to the capabilities of modern GPUs and propose a considerably accelerated computation of the inherent argmax. Altogether, we achieve a 100-fold speed-up, which enables the use of FSR in real-time applications.
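
As a rough illustration of the step being accelerated: in every FSR iteration, each block selects the basis function with the largest weighted projection onto its current residual, an argmax that can be batched across all blocks at once. The minimal numpy sketch below shows such a block-parallel selection; the function name and the frequency weighting are illustrative assumptions, and it stands in for, rather than reproduces, the register-based GPU implementation proposed in the paper.

    import numpy as np

    def fsr_select_basis(residual_spectrum, weights):
        # One FSR selection step, vectorized over all blocks: per block, pick
        # the basis function whose weighted projection magnitude is largest.
        # residual_spectrum: (num_blocks, B, B) complex DFT of block residuals
        # weights: (B, B) frequency weighting (here favouring low frequencies)
        scores = weights * np.abs(residual_spectrum) ** 2
        flat = scores.reshape(scores.shape[0], -1)
        best = np.argmax(flat, axis=1)            # one batched argmax per block
        return np.unravel_index(best, scores.shape[1:])

    # Toy usage: 4096 blocks of size 8x8 handled in a single vectorized call.
    rng = np.random.default_rng(0)
    spec = rng.standard_normal((4096, 8, 8)) + 1j * rng.standard_normal((4096, 8, 8))
    w = 1.0 / (1.0 + np.add.outer(np.arange(8.0), np.arange(8.0)))
    rows, cols = fsr_select_basis(spec, w)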

Regensky-Real-Time Frequency Selective Reconstruction through Register-Based Argmax Calculation-106.pdf


10:55am - 11:00am

Object-Oriented Motion Estimation using Edge-Based Image Registration

Md. Asikuzzaman, Deepak Rajamohan, Mark R. Pickering

The University of New South Wales, Australia

Video data storage and transmission costs can be reduced by minimizing the temporally redundant information among frames using an appropriate motion-compensated prediction technique. In the current video coding standard, neighbouring frames are exploited to predict the motion of the current frame using global motion estimation-based approaches. However, the global motion estimate of a frame may not reflect the actual motion of the individual objects in it, as each object usually has its own motion. In this paper, an edge-based motion estimation technique is presented that finds the motion of each object in the frame rather than the global motion of the frame as a whole. In the proposed method, image registration between two frames based on an edge position difference (EPD) similarity measure is applied to register each object in the frame. A superpixel search is then applied to segment the registered object. Finally, the proposed edge-based image registration technique and the Demons algorithm are applied to predict the objects in the current frame. Our experimental analysis demonstrates that the proposed algorithm estimates the motions of individual objects in the current frame more accurately than existing global motion estimation-based approaches.
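
The abstract does not spell out the EPD measure; as a minimal sketch of one plausible reading, the snippet below scores a candidate alignment by how close the moved frame's edge pixels fall to the reference frame's edges, using a distance transform. The edge detector, threshold, and exhaustive shift search are placeholder assumptions, not the paper's method.

    import numpy as np
    from scipy.ndimage import distance_transform_edt, sobel

    def edge_map(frame, thresh=0.2):
        # Simple gradient-magnitude edge detector (a stand-in for any detector).
        gx, gy = sobel(frame, axis=1), sobel(frame, axis=0)
        mag = np.hypot(gx, gy)
        return mag > thresh * mag.max()

    def epd_similarity(edges_ref, edges_moved):
        # Negated mean distance from each moved edge pixel to the nearest
        # reference edge: higher means the edges line up better.
        dist_to_ref = distance_transform_edt(~edges_ref)
        if not edges_moved.any():
            return -np.inf
        return -dist_to_ref[edges_moved].mean()

    # Toy registration: recover the integer shift of a moved square.
    ref = np.zeros((64, 64)); ref[20:40, 20:40] = 1.0
    cur = np.roll(ref, (3, -2), axis=(0, 1))
    e_ref, e_cur = edge_map(ref), edge_map(cur)
    best = max(((dy, dx) for dy in range(-5, 6) for dx in range(-5, 6)),
               key=lambda s: epd_similarity(e_ref, np.roll(e_cur, s, axis=(0, 1))))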

Asikuzzaman-Object-Oriented Motion Estimation using Edge-Based Image Registration-188.pdf


11:00am - 11:05am
⭐ This paper has been nominated for the best paper award.

Convolution Autoencoder-Based Sparse Representation Wavelet for Image Classification

Tan-Sy Nguyen, Long H. Ngo, Marie Luong, Mounir Kaaniche, Azeddine Beghdadi

L2TI, Université Sorbonne Paris Nord, France

In this paper, we propose an effective Convolutional Autoencoder (AE) model for Sparse Representation (SR) in the Wavelet Domain for Classification (SRWC). The proposed approach involves an autoencoder with a sparse latent layer for learning sparse codes of wavelet features. The estimated sparse codes are used to assign classes to test samples via a residual-based probabilistic criterion. Extensive experiments carried out on various datasets show that the proposed method yields better classification accuracy while requiring significantly fewer network parameters than several recent deep learning-based methods.
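
As a rough sketch of the recipe described above (not the authors' exact architecture), the PyTorch snippet below pairs a small autoencoder with an L1 penalty on its latent layer so that the learned codes of flattened wavelet features become sparse, and adds a simplified residual-based assignment rule. All dimensions, the sparsity weight, and the function names are illustrative assumptions.

    import torch
    import torch.nn as nn

    class SparseAE(nn.Module):
        # Autoencoder with an L1-penalised latent layer; inputs are assumed to
        # be flattened wavelet subband coefficients (e.g. computed with pywt).
        def __init__(self, in_dim=256, code_dim=64):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(in_dim, code_dim), nn.ReLU())
            self.dec = nn.Linear(code_dim, in_dim)

        def forward(self, x):
            z = self.enc(x)
            return self.dec(z), z

    def sparse_ae_loss(x, x_hat, z, sparsity=1e-3):
        # Reconstruction error plus an L1 term that drives the codes sparse.
        return nn.functional.mse_loss(x_hat, x) + sparsity * z.abs().mean()

    def assign_class(z_test, class_codes):
        # class_codes: dict mapping label -> (n_c, code_dim) training codes.
        # Assign the class whose training codes leave the smallest residual to
        # the test code; a simplified stand-in for the paper's residual-based
        # probabilistic criterion.
        residuals = {c: torch.cdist(z_test[None], Z).min().item()
                     for c, Z in class_codes.items()}
        return min(residuals, key=residuals.get)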

Nguyen-Convolution Autoencoder-Based Sparse Representation Wavelet-150.pdf


11:05am - 11:10am

Efficient Adaptive Inference leveraging Bag-of-Features-based Early Exits

Nikolaos Passalis¹, Jenni Raitoharju², Moncef Gabbouj³, Anastasios Tefas¹

¹Aristotle University of Thessaloniki, Greece; ²Programme for Environmental Information, Finnish Environment Institute, Jyväskylä, Finland; ³Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland

Early exits provide an effective way of implementing adaptive computational graphs over deep learning models. In this way, models can be adapted on the fly to the available computational resources, or even to the difficulty of each input sample, reducing the energy and computational power requirements of many embedded and mobile applications. However, this kind of adaptive inference also comes with several challenges, since the difficulty of each sample must be estimated and the most appropriate early exit selected. Existing approaches often lead to highly unbalanced distributions over the selected early exits, reducing the efficiency of the adaptive inference process. At the same time, only limited resources can be devoted to this selection process in order to ensure that an adequate speedup is obtained. The main contribution of this work is an easy-to-use and easy-to-tune adaptive inference approach for early exits that overcomes some of these limitations. The proposed method allows for a) obtaining a more balanced inference distribution among the early exits, b) relying on a single, interpretable hyper-parameter to tune its behavior (ranging from faster inference to higher accuracy), and c) improving the performance of the networks (increasing accuracy and reducing inference time). The effectiveness of the proposed method over existing approaches is demonstrated on four different image datasets.
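
A minimal sketch of the generic early-exit policy discussed above (the paper's Bag-of-Features-based exits are not reproduced here): classifier heads are attached at intermediate depths of a toy backbone, and at inference a sample leaves through the first exit whose confidence clears a single threshold; that one knob plays the role of a single interpretable hyper-parameter trading speed against accuracy. All layer sizes are arbitrary.

    import torch
    import torch.nn as nn

    class EarlyExitNet(nn.Module):
        # Toy two-stage backbone with a classifier head after each stage.
        def __init__(self, num_classes=10):
            super().__init__()
            self.stages = nn.ModuleList([
                nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                              nn.AdaptiveAvgPool2d(8)),
                nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                              nn.AdaptiveAvgPool2d(4)),
            ])
            self.exits = nn.ModuleList([
                nn.Linear(16 * 8 * 8, num_classes),
                nn.Linear(32 * 4 * 4, num_classes),
            ])

        @torch.no_grad()
        def adaptive_forward(self, x, threshold=0.9):
            # Per-sample inference (batch size 1): leave through the first
            # exit whose top-class probability reaches the threshold.
            for stage, head in zip(self.stages, self.exits):
                x = stage(x)
                probs = head(x.flatten(1)).softmax(dim=-1)
                conf, pred = probs.max(dim=-1)
                if conf.item() >= threshold:
                    return pred, probs
            return pred, probs  # fell through to the final exit

    # Usage: lower thresholds favour speed, higher thresholds favour accuracy.
    net = EarlyExitNet().eval()
    pred, probs = net.adaptive_forward(torch.randn(1, 3, 32, 32), threshold=0.9)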

Passalis-Efficient Adaptive Inference leveraging Bag-of-Features-based Early Exits-118.pdf


11:10am - 11:15am

Graph-based Deep Learning Analysis and Instance Selection

Keisuke Nonaka¹, Sarath Shekkizhar², Antonio Ortega²

¹KDDI Corporation; ²University of Southern California

While deep learning is a powerful tool for many applications, there has been only limited research on the selection of data for training, i.e., instance selection, which enhances deep learning scalability by saving computational resources. This can be attributed in part to the difficulty of interpreting deep learning models. While some graph-based methods have been proposed to improve the performance and interpret the behavior of deep learning models, the instance selection problem has not been addressed from this graph perspective.


In this paper, we analyze the behavior of deep learning outputs using K-nearest neighbor (KNN) graph construction. We observe that when a directed KNN graph is constructed, instead of the more conventional undirected KNN graph, a large number of instances become isolated nodes, i.e., they do not belong to the directed neighborhoods of any other nodes. Based on this observation, we propose two new instance selection methods, based on maximizing and minimizing the instance distribution. In our experiments, the instances selected by the maximizing method achieve better accuracy than those chosen by the conventional method. Furthermore, the results show that our method is robust to changes in CNN settings and that the analysis is reliable.
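
As a small sketch of the directed-KNN analysis described above, the snippet below builds the directed k-nearest-neighbour relation on already-extracted deep features (e.g. penultimate-layer activations) and finds the isolated instances, i.e. those with in-degree zero. Ranking instances by in-degree as a selection criterion is an illustrative assumption, not the paper's exact method.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def directed_knn_indegree(features, k=10):
        # Each sample points to its k nearest neighbours; the in-degree counts
        # how often a sample appears in other samples' neighbourhoods.
        nbrs = NearestNeighbors(n_neighbors=k + 1).fit(features)
        _, idx = nbrs.kneighbors(features)   # column 0 is the sample itself
        indeg = np.zeros(len(features), dtype=int)
        np.add.at(indeg, idx[:, 1:].ravel(), 1)
        return indeg

    # Toy usage on random "deep features"; real inputs would come from the
    # trained network.
    feats = np.random.default_rng(0).standard_normal((1000, 64))
    indeg = directed_knn_indegree(feats, k=10)
    isolated = np.flatnonzero(indeg == 0)    # nodes in no one's neighbourhood
    ranked = np.argsort(indeg)               # candidates for selection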

Nonaka-Graph-based Deep Learning Analysis and Instance Selection-273.pdf