Laboratory for Image & Video Engineering

LIVE-Meta Rendered Human Avatar Video Quality Assessment Database

Introduction

We study the visual quality judgments of human subjects on digital human avatars (sometimes referred to as "holograms" in the parlance of virtual reality [VR] and augmented reality [AR] systems) that have been subjected to distortions. We also study the ability of video quality models to predict human judgments. As streaming human avatar videos in VR and AR becomes increasingly common, more advanced human avatar video compression protocols will be required to address the tradeoffs between faithfully transmitting high-quality visual representations and adapting to changeable bandwidth scenarios. During transmission over the internet, the perceived quality of compressed human avatar videos can be severely impaired by visual artifacts. Video quality assessment (VQA) models are essential tools for optimizing trade-offs between perceptual quality and data volume in practical workflows. However, very few VQA algorithms have been developed specifically to analyze human body avatar videos, due, at least in part, to the dearth of appropriate and comprehensive datasets of adequate size. Towards filling this gap, we introduce the LIVE-Meta Rendered Human Avatar VQA Database, which contains 720 human avatar videos processed using 20 different combinations of encoding parameters, labeled by corresponding human perceptual quality judgments that were collected in six degrees of freedom (6DoF) VR headsets. To demonstrate the usefulness of this new and unique video resource, we use it to study and compare the performances of a variety of state-of-the-art Full Reference and No Reference video quality prediction models, including a new model called HoloQA.

Sample frames of (a), (b) standing and (c), (d) sitting human avatar videos from the LIVE-Meta Rendered Human Avatar VQA Database.
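To make the Full Reference / No Reference distinction concrete, below is a minimal Python sketch of a full-reference measurement: it computes PSNR between a pristine frame and its compressed counterpart. This is only an illustrative baseline, not one of the benchmarked models, and the frame file names are hypothetical placeholders.

    # Minimal full-reference (FR) quality sketch: PSNR between a reference
    # frame and its compressed counterpart. The frame file names below are
    # hypothetical placeholders; the FR models benchmarked in the paper are
    # far more sophisticated than pixelwise PSNR.
    import numpy as np
    from PIL import Image

    def psnr(ref: np.ndarray, dis: np.ndarray, max_val: float = 255.0) -> float:
        """Peak signal-to-noise ratio (dB) between two same-sized frames."""
        mse = np.mean((ref.astype(np.float64) - dis.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # identical frames
        return 10.0 * np.log10(max_val ** 2 / mse)

    if __name__ == "__main__":
        ref = np.asarray(Image.open("avatar_reference_frame.png").convert("RGB"))
        dis = np.asarray(Image.open("avatar_compressed_frame.png").convert("RGB"))
        print(f"PSNR: {psnr(ref, dis):.2f} dB")

State-of-the-art FR models such as SSIM or VMAF replace the pixelwise error above with perceptually motivated measurements, while No Reference models dispense with the pristine reference entirely.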

Download

Although we cannot make the proprietary Metastage videos freely available, other users may purchase them. To facilitate such efforts, we are making the metadata of the database publicly available to the research community. If you use this metadata in your research, we kindly ask that you cite our paper and website listed below:

  • Y. C. Chen, A. Saha, A. Chapiro, C. Häne, J. C. Bazin, B. Qiu, S. Zanetti, I. Katsavounidis, and A. C. Bovik, "Subjective and Objective Quality Assessment of Rendered Human Avatar Videos in Virtual Reality," IEEE Transactions on Image Processing, 2024. [IEEE Xplore] [arXiv]

Download Link Here! Please fill out the Google Form to gain access to the metadata of this database.

Database Description

The LIVE-Meta Rendered Human Avatar VQA Database contains 720 videos derived from 36 source sequences of dynamic human avatar videos, rendered with varying degrees of spatial and temporal distortions, which were viewed and quality-rated by 78 human subjects in an immersive 6DoF VR environment. To demonstrate the value of the new subjective dataset, we evaluated the performances of a variety of state-of-the-art VQA models on it. In addition, we describe new holographic video quality predictors of our own design, and test and compare them on the new dataset.
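The standard protocol for evaluating a VQA model on a database such as this one is to correlate the model's objective predictions against the subjective mean opinion scores (MOS) using Spearman's rank-order correlation coefficient (SROCC) and Pearson's linear correlation coefficient (PLCC). The Python sketch below illustrates the computation; the CSV file names and column names are assumptions made for illustration, not the official metadata schema.

    # Sketch of the standard VQA evaluation protocol: correlate a model's
    # objective predictions with the database's mean opinion scores (MOS).
    # The file names and the "video", "mos", and "score" columns are
    # illustrative assumptions, not the official metadata schema.
    import pandas as pd
    from scipy.stats import pearsonr, spearmanr

    meta = pd.read_csv("live_meta_avatar_metadata.csv")  # hypothetical file
    preds = pd.read_csv("model_predictions.csv")         # one score per video

    merged = meta.merge(preds, on="video")
    srocc, _ = spearmanr(merged["mos"], merged["score"])  # monotonicity
    plcc, _ = pearsonr(merged["mos"], merged["score"])    # linearity
    print(f"SROCC: {srocc:.4f}  PLCC: {plcc:.4f}")

Note that published VQA studies usually report PLCC after first fitting a monotonic logistic function that maps the objective predictions onto the MOS scale; the raw Pearson correlation above omits that fitting step for brevity.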

Investigators

The investigators in this research are Y. C. Chen, A. Saha, A. Chapiro, C. Häne, J. C. Bazin, B. Qiu, S. Zanetti, I. Katsavounidis, and A. C. Bovik.

Copyright Notice

-----------COPYRIGHT NOTICE STARTS WITH THIS LINE------------
Copyright (c) 2024 The University of Texas at Austin
All rights reserved.

Permission is hereby granted, without written agreement and without license or royalty fees, to use, copy, modify, and distribute this database (the videos, the results and the source files) and its documentation for any purpose, provided that the copyright notice in its entirety appear in all copies of this database, and the original source of this database, Laboratory for Image and Video Engineering (LIVE, http://live.ece.utexas.edu) at the University of Texas at Austin (UT Austin, http://www.utexas.edu ), is acknowledged in any publication that reports research using this database.

The following paper and website are to be cited in the bibliography whenever the database is used:

  • Y. C. Chen, A. Saha, A. Chapiro, C. Häne, J. C. Bazin, B. Qiu, S. Zanetti, I. Katsavounidis, and A. C. Bovik, "Subjective and Objective Quality Assessment of Rendered Human Avatar Videos in Virtual Reality," IEEE Transactions on Image Processing, 2024. [IEEE Xplore] [arXiv]
  • Y. C. Chen, A. Saha, A. Chapiro, C. Häne, J. C. Bazin, B. Qiu, S. Zanetti, I. Katsavounidis, and A. C. Bovik, "LIVE-Meta Rendered Human Avatar Video Quality Assessment Database," Online: https://live.ece.utexas.edu/research/LIVE-Meta-rendered-human-avatar/index.html, 2024.

IN NO EVENT SHALL THE UNIVERSITY OF TEXAS AT AUSTIN BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OF THIS DATABASE AND ITS DOCUMENTATION, EVEN IF THE UNIVERSITY OF TEXAS AT AUSTIN HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

THE UNIVERSITY OF TEXAS AT AUSTIN SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE DATABASE PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND THE UNIVERSITY OF TEXAS AT AUSTIN HAS NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.

-----------COPYRIGHT NOTICE ENDS WITH THIS LINE------------
