Overview and details of the sessions of this conference.
MQA2-O: Multimedia Quality Assessment 2
8:30am - 8:45am
Study on viewing completion ratio of video streaming
In this paper, a model for optimizing the encoding of adaptive-bitrate video streaming is investigated. To this aim, the relationship between quality, content duration, and acceptability, measured through the completion ratio, is studied. This work is based on extensive subjective testing performed in a laboratory environment and shows the importance of stimulus duration in acceptance studies. A model to predict the completion ratio of videos is provided and shows good accuracy. Using this model, it is shown how quality requirements can be derived from a targeted abandonment rate and the content duration. This work finds application in the definition of coding conditions by video-streaming providers when preparing content for their platforms, allowing them to maintain user engagement.
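The abstract does not give the model's functional form; as an illustration only, a completion-ratio predictor of the kind described could be sketched as a logistic function of quality and duration (the function name and all parameter values below are hypothetical):

```python
import numpy as np

def predicted_completion_ratio(quality_mos, duration_s, a=1.5, b=3.0, c=0.002):
    """Hypothetical logistic model: predicted completion ratio rises with
    perceived quality (MOS, 1-5) and falls with content duration (seconds).
    Parameters a, b, c are purely illustrative, not fitted values."""
    z = a * quality_mos - b - c * duration_s
    return float(1.0 / (1.0 + np.exp(-z)))

# At a fixed duration, higher quality yields a higher predicted completion;
# at fixed quality, a longer stimulus yields a lower one.
r_low_q  = predicted_completion_ratio(2.0, 60)
r_high_q = predicted_completion_ratio(4.5, 60)
r_long   = predicted_completion_ratio(4.5, 600)
```

Inverting such a function for a target abandonment rate would give the minimum quality requirement per duration, which is the use case the abstract describes.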
8:45am - 9:00am
Distortion Specific Contrast Based No-Reference Quality Assessment of DIBR-Synthesized Views
1IIT Jammu, India; 2KLens GmbH, Germany
In the literature, many 3D-Synthesized Image Quality Assessment (IQA) algorithms have been proposed that predict the geometric and structural distortions present in synthesized datasets. With the rapid growth of accurate inpainting algorithms, certain types of distortion, such as black holes, have become obsolete. Unfortunately, the existing IQA algorithms mainly concentrate on efficiently identifying these black holes and subsequently predicting the perceptual quality of 3D-synthesized views. The performance of these algorithms is quite weak on the recently proposed IETR dataset. Towards this end, we propose a new completely blind IQA algorithm, which is based on the following key observations: 1. Distortions such as blurriness, blockiness (compression artifacts), and fast fading (object shifting) primarily affect the perceptual quality of 3D-synthesized views. 2. The perceptual characteristics of natural and synthetic synthesized views are quite different; distortions in natural views are perceptually more noticeable than those in synthetic views. 3. The Human Visual System's (HVS) ability to assess the perceptual quality of an image also depends on other properties of the image, such as contrast. All these observations are integrated into the proposed algorithm, named Distortion-Specific Contrast-Based (DSCB) IQA. Various experiments validate that the proposed DSCB IQA agrees well with human perception and exhibits substantially better results (at least a 17% gain in terms of PLCC) than the existing IQAs.
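The abstract does not specify which contrast measure DSCB uses; one common choice consistent with observation 3 is RMS contrast, sketched here as a simplified illustration (not the paper's implementation):

```python
import numpy as np

def rms_contrast(img):
    """RMS contrast: the standard deviation of pixel intensities of a
    grayscale image with values in [0, 1]. A flat image has contrast 0;
    a full-range checkerboard has contrast 0.5."""
    return float(np.asarray(img, dtype=np.float64).std())
```

A blind IQA algorithm would typically combine such a contrast feature with distortion-specific features (blur, blockiness) before regressing to a quality score.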
9:00am - 9:15am
A Large-scale Evaluation of the bitstream-based video-quality model ITU-T P.1204.3 on Gaming Content
1Technische Universität Ilmenau, Germany; 2Technische Universität Berlin, Germany; 3Kingston University, London, UK
The streaming of gaming content, both passive and interactive, has increased manifold in recent years. Gaming content brings with it some peculiarities that are normally not seen in traditional 2D videos, such as the artificial, synthetic nature of the content or the repetition of objects in a game. In addition, the perception of gaming content by the user differs from that of traditional 2D videos due to these peculiarities and the fact that users may not often watch such content. Hence, it becomes imperative to evaluate whether the existing video-quality models, usually designed for traditional 2D videos, are applicable to gaming content. In this paper, we evaluate the applicability of the recently standardized bitstream-based video-quality model ITU-T P.1204.3 to gaming content. To analyze the performance of this model, we used 4 different gaming datasets (3 publicly available + 1 internal) not previously used for model training, and compared it with the existing state-of-the-art models. We found that the ITU-T P.1204.3 model performs well out of the box on these unseen datasets, with an RMSE ranging between 0.38 and 0.45 on the 5-point absolute category rating scale and a Pearson correlation between 0.85 and 0.93 across all 4 databases. We further propose a full-HD variant of the P.1204.3 model, since the original model was trained and validated targeting a resolution of 4K/UHD-1. A 50:50 split across all databases is used to train and validate this variant so as to make sure that the proposed model is applicable to various conditions.
9:15am - 9:30am
A Multi-Criteria Contrast Enhancement Evaluation Measure using Wavelet Decomposition
1L2TI, Universite Sorbonne Paris Nord, Villetaneuse, France; 2Norwegian Colour and Visual Computing Lab, NTNU, Gjøvik, Norway; 3The Islamia University of Bahawalpur, 63100, Pakistan
An effective contrast enhancement method should not only improve the perceptual quality of an image but should also avoid adding artifacts or affecting the naturalness of the image. This makes Contrast Enhancement Evaluation (CEE) a challenging task, in the sense that both the improvement in image quality and unwanted side effects need to be checked for. Currently, there is no single CEE metric that works well for all kinds of enhancement criteria. In this paper, we propose a new Multi-Criteria CEE (MCCEE) measure which combines different metrics effectively to give a single quality score. In order to fully exploit the potential of these metrics, we further propose to apply them to the image decomposed using the wavelet transform. This new metric has been tested on two natural-image contrast enhancement databases as well as on medical Computed Tomography (CT) images. The results show a substantial improvement compared to the existing evaluation metrics.
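As an illustration of the decomposition step, a one-level 2-D Haar transform splits an image into an approximation subband and three detail subbands on which per-criterion metrics could then be applied (the paper's actual wavelet choice and combination rule are not specified in the abstract; the fusion function below is a hypothetical weighted geometric mean):

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar wavelet decomposition (image sides must be even).
    Returns the approximation (LL) and detail (LH, HL, HH) subbands,
    each half the size of the input in both dimensions."""
    img = np.asarray(img, dtype=np.float64)
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def mcee_score(metric_scores, weights):
    """Illustrative fusion of per-criterion scores into a single quality
    score via a weighted geometric mean (hypothetical, not the MCCEE rule)."""
    s = np.asarray(metric_scores, dtype=np.float64)
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()
    return float(np.prod(s ** w))
```

A flat image produces zero-valued detail subbands, so metrics computed on LH/HL/HH isolate the structural changes that an enhancement method introduces.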