Conference Agenda

Overview and details of the conference sessions.

Session Overview
Session: MCN1-P: Multimedia Communications and Networking 1
Time: Wednesday, 23 Sept 2020, 6:25pm - 6:40pm

Session Chair: Nicholas Mastronarde
Location: Virtual platform

Presentations
6:25pm - 6:30pm

Smart caching for live 360° video streaming in mobile networks

Pantelis Maniotis, Nikolaos Thomos

University of Essex, United Kingdom

Despite the advances of 5G systems, the delivery of 360° video content in mobile networks remains challenging because of the size of 360° video files. Recently, edge caching has been shown to bring large performance gains to 360° Video on Demand (VoD) delivery systems; however, existing systems cannot be straightforwardly applied to live 360° video streaming. To address this issue, we investigate edge cache-assisted live 360° video streaming. As video and tile popularities vary with time, our framework employs a Long Short-Term Memory (LSTM) network to determine the optimal cache placement/eviction strategies that maximize the quality of the videos rendered by the users. To further enhance the delivered video quality, users located in the overlap of the coverage areas of multiple small base stations (SBSs) are allowed to receive their data from any of these SBSs. We evaluate and compare the performance of our method with that of state-of-the-art systems. The results show the superiority of the proposed method over its counterparts and make clear the benefits, in terms of delivered quality, of accurate tile popularity prediction by the LSTM network and of associating users with multiple SBSs.

Maniotis-Smart caching for live 360$o$ video streaming in mobile networks-257.pdf
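
As a rough illustration of the pipeline this abstract describes, here is a minimal sketch of LSTM-driven tile-popularity prediction feeding a greedy cache placement, assuming PyTorch. The tile count, history length, layer sizes, and top-K placement rule are placeholder assumptions for clarity, not the authors' design:

```python
# Illustrative sketch (not the authors' code): predicting per-tile request
# popularity with an LSTM and caching the top-K tiles. All shapes, layer
# sizes, and the greedy top-K placement rule are assumptions.
import torch
import torch.nn as nn

NUM_TILES = 24        # tiles per 360° frame (assumed)
HISTORY = 8           # past request intervals fed to the LSTM (assumed)
CACHE_CAPACITY = 6    # tiles the edge cache can hold (assumed)

class TilePopularityLSTM(nn.Module):
    """Maps a history of per-tile request counts to predicted next counts."""
    def __init__(self, num_tiles: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=num_tiles, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, num_tiles)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, HISTORY, NUM_TILES) request counts per interval
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predicted counts, next interval

def plan_cache(predicted: torch.Tensor, capacity: int) -> list[int]:
    """Greedy placement: keep the tiles with the highest predicted demand."""
    return torch.topk(predicted, capacity).indices.tolist()

model = TilePopularityLSTM(NUM_TILES)
history = torch.rand(1, HISTORY, NUM_TILES)   # stand-in request trace
cached_tiles = plan_cache(model(history)[0], CACHE_CAPACITY)
print("tiles to cache next interval:", cached_tiles)
```

In the paper's setting, the predicted popularities would drive joint placement/eviction decisions across cooperating SBSs rather than a single greedy top-K pick.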


6:30pm - 6:35pm

RF-FSO Dual-Path UAV Network for High Fidelity Multi-Viewpoint Scalable 360 Degree Video Streaming

Mahmudur Rahman Khan¹, Jacob Chakareski¹, Sabyasachi Gupta²

¹New Jersey Institute of Technology, United States of America; ²Southern Methodist University, United States of America

We explore a novel RF-FSO dual-path UAV network for aerial capture and streaming of scalable 360-degree video of a remote scene, to enable future virtual human teleportation. One UAV captures the 360-degree video viewpoint and constructs a scalable tiling representation of the data comprising a base layer and an enhancement layer. The base layer is sent by the UAV to a ground-based remote server over a direct RF link. The enhancement layer is relayed by the UAV to the server over a multi-hop path comprising directional UAV-to-UAV FSO links. The viewport-specific content from the two layers is then integrated at the server to construct high-fidelity content to stream to a remote VR user. The dual-path connectivity ensures both reliability and high-fidelity remote immersion. We formulate an optimization problem to maximize the delivered immersion fidelity, which depends on the content capture rate, the FSO and RF link rates, effective routing path selection, and fast UAV deployment. The problem is a mixed integer program, and we formulate an optimization framework that captures the optimal solution at lower complexity. Our experimental results demonstrate up to a 6 dB gain in delivered immersion fidelity over a state-of-the-art method and, for the first time, enable 12K-120fps 360-degree video streaming at high fidelity.

Khan-RF-FSO Dual-Path UAV Network for High Fidelity Multi-Viewpoint Scalable 360 Degree Video Streaming-288.pdf
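
To make the dual-path idea concrete, here is a toy sketch, with assumed layer bitrates and link rates, of how the base layer rides the direct RF link while the enhancement layer is constrained by the slowest hop of the FSO relay path; none of the numbers come from the paper:

```python
# Illustrative sketch (not the authors' formulation): routing a scalable
# 360-degree stream over a dual-path UAV network. Link rates, layer
# bitrates, and the bottleneck rule are assumed values for demonstration.

BASE_RATE = 25.0          # Mbps needed for the base layer (assumed)
ENH_RATE = 90.0           # Mbps needed for the enhancement layer (assumed)
RF_LINK_RATE = 40.0       # UAV -> ground server direct RF rate (assumed)
FSO_HOP_RATES = [220.0, 180.0, 150.0]  # per-hop rates on the FSO relay path

def path_rate(hop_rates: list[float]) -> float:
    """A multi-hop path is limited by its slowest hop."""
    return min(hop_rates)

def deliverable_layers() -> list[str]:
    layers = []
    if RF_LINK_RATE >= BASE_RATE:                 # base layer rides RF
        layers.append("base")
        if path_rate(FSO_HOP_RATES) >= ENH_RATE:  # enhancement rides FSO
            layers.append("enhancement")
    return layers

print("layers delivered this interval:", deliverable_layers())
```

The paper's actual formulation jointly optimizes capture rate, routing path selection, and UAV deployment as a mixed integer program; the bottleneck rule above only illustrates why the FSO path's per-hop rates matter.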


6:35pm - 6:40pm

Mobile-Edge Cooperative Multi-User 360-Degree Video Computing and Streaming

Jacob Chakareski¹, Nicholas Mastronarde²

¹New Jersey Institute of Technology, United States of America; ²University at Buffalo, United States of America

We investigate a novel communications system that integrates scalable multi-layer 360° video tiling, viewport-adaptive rate-distortion optimal resource allocation, and VR-centric edge computing and caching, to enable future high-quality untethered VR streaming. Our system comprises a collection of 5G small cells that can pool their communication, computing, and storage resources to collectively deliver scalable 360° video content to mobile VR clients at much higher quality. Our major contributions are the rigorous design of multi-layer 360° tiling and related models of statistical user navigation, and the analysis and optimization of edge-based multi-user VR streaming that integrates viewport adaptation and server cooperation. We also explore the possibility of network-coded data operation and its implications for the analysis, optimization, and system performance we pursue here. We demonstrate considerable gains in delivered immersion fidelity, featuring much higher 360° viewport peak signal-to-noise ratio (PSNR) and VR video frame rates and spatial resolutions.

Chakareski-Mobile-Edge Cooperative Multi-User 360-Degree Video Computing and Streaming-289.pdf
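
As a loose illustration of the viewport-adaptive allocation named in this abstract, here is a greedy utility-per-bit heuristic, with assumed viewing probabilities, costs, and gains, standing in for the paper's rate-distortion optimal resource allocation:

```python
# Illustrative sketch (not the paper's optimizer): viewport-adaptive
# allocation of a small cell's downlink budget across 360-degree tiles.
# The probabilities, per-tile costs, gains, and the greedy heuristic are
# all assumptions made for this example.

VIEW_PROB = [0.30, 0.25, 0.20, 0.10, 0.08, 0.07]  # chance each tile is viewed
ENH_COST = [8.0, 8.0, 8.0, 8.0, 8.0, 8.0]         # Mbps per enhancement layer
ENH_GAIN = [6.0, 6.0, 6.0, 6.0, 6.0, 6.0]         # dB PSNR gain if viewed
BUDGET = 24.0                                     # Mbps left after base layers

def allocate(budget: float) -> list[int]:
    """Greedily fund enhancement layers by expected quality gain per Mbps."""
    order = sorted(range(len(VIEW_PROB)),
                   key=lambda t: VIEW_PROB[t] * ENH_GAIN[t] / ENH_COST[t],
                   reverse=True)
    chosen = []
    for t in order:
        if ENH_COST[t] <= budget:
            budget -= ENH_COST[t]
            chosen.append(t)
    return chosen

print("tiles granted enhancement layers:", allocate(BUDGET))
```

The paper additionally pools resources across cooperating small cells and integrates edge computing and caching; this sketch covers only the single-cell allocation intuition.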