Laboratory for Image & Video Engineering

No Reference Image and Video Quality Assessment

Please go here to download our quality assessment databases and for free software releases of our quality assessment algorithms.


Objective quality assessment is a very complicated task, and even full-reference QA methods have had only limited success in making accurate quality predictions. Researchers therefore tend to break the problem of NR QA into smaller, domain-specific problems by targeting a limited class of artifacts: distortion-specific IQA. The most common target is the blocking artifact, which usually results from block-based compression algorithms running at low bit rates. At LIVE we have conducted research into NR QA for blocking distortion, as well as pioneering research into NR measurement of distortion introduced by wavelet-based compression algorithms, based on natural scene statistics modeling.

Recently, we have tackled the distortion-agnostic no-reference/blind IQA problem, i.e., we have designed algorithms capable of assessing the quality of an image without the need for a reference and without knowledge of the distortions affecting the image.


Blind Video Quality Assessment (Video BLIINDS)

We propose the "Video BLIINDS" blind video quality evaluation approach, which is non-distortion-specific. The approach relies on a spatio-temporal model of video scenes in the discrete cosine transform (DCT) domain, and on a model that characterizes the type of motion occurring in the scenes, to predict video quality. The video quality assessment (VQA) algorithm does not require the presence of a pristine video to compare against in order to predict a quality score. The contributions of this work are threefold:

1) We propose a spatio-temporal natural scene statistics (NSS) model for videos.
2) We propose a motion model that quantifies motion coherency in video scenes.
3) We show that the proposed NSS and motion coherency models are appropriate for quality assessment of videos, and we utilize them to design a blind VQA algorithm that correlates highly with human judgments of quality.

The proposed algorithm, called Video BLIINDS, is tested on the LIVE VQA Database. We demonstrate that its performance approaches that of the top-performing reduced-reference and full-reference algorithms.
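As a rough sketch of the representation Video BLIINDS builds its statistics on (not the published implementation; block size and normalization here are illustrative), the model works with block DCTs of differences between consecutive frames:

```python
import numpy as np
from scipy.fft import dctn

def frame_difference_dct_blocks(prev_frame, curr_frame, bsize=8):
    """Block-wise 2-D DCT of the difference between consecutive frames,
    the kind of spatio-temporal representation whose NSS are modeled."""
    diff = curr_frame.astype(np.float64) - prev_frame.astype(np.float64)
    h, w = diff.shape
    blocks = []
    for i in range(0, h - bsize + 1, bsize):
        for j in range(0, w - bsize + 1, bsize):
            block = diff[i:i + bsize, j:j + bsize]
            blocks.append(dctn(block, norm='ortho'))  # orthonormal 2-D DCT
    return np.array(blocks)
```

The statistics of these coefficients (e.g., their tail behavior across frequency bands) are what the spatio-temporal NSS model summarizes into quality-predictive features.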

Relevant Publications:

1. M. A. Saad and A. C. Bovik, "Blind Quality Assessment of Videos Using a Model of Natural Scene Statistics and Motion Coherency," Asilomar Conference on Signals, Systems, and Computers, November 2012.

Natural Image Quality Evaluator (NIQE)

All current state-of-the-art general-purpose no-reference (NR) IQA algorithms require knowledge about anticipated distortions in the form of training examples and corresponding human opinion scores. In contrast, the Natural Image Quality Evaluator (NIQE) is a completely blind image quality analyzer that only makes use of measurable deviations from statistical regularities observed in natural images, without training on human-rated distorted images and, indeed, without any exposure to distorted images.

NIQE is based on the construction of a 'quality-aware' collection of statistical features derived from a simple and successful space-domain natural scene statistic (NSS) model, fitted to a corpus of natural, undistorted images. Experimental results show that the new index delivers performance comparable to top-performing NR IQA models that require training on large databases of human opinions of distorted images.
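Concretely, NIQE summarizes the NSS features of the pristine corpus and of the test image as multivariate Gaussian (MVG) models, and scores quality as the distance between the two fits. The distance below follows the form used in the NIQE paper; the feature extraction that produces the means and covariances is omitted:

```python
import numpy as np

def niqe_distance(mu_pristine, cov_pristine, mu_test, cov_test):
    """Distance between two multivariate Gaussian feature models:
    sqrt( (mu1-mu2)^T * ((Sigma1+Sigma2)/2)^-1 * (mu1-mu2) ).
    Larger values indicate greater departure from 'naturalness'."""
    diff = mu_pristine - mu_test
    pooled = (cov_pristine + cov_test) / 2.0
    # pinv guards against a singular pooled covariance
    return float(np.sqrt(diff @ np.linalg.pinv(pooled) @ diff))
```

An image whose feature model matches the pristine model exactly scores zero; distortion moves the test model away and the score grows.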

Relevant Publications:

1. A. Mittal, R. Soundararajan and A. C. Bovik, "Making a 'Completely Blind' Image Quality Analyzer," IEEE Signal Processing Letters, vol. 20, no. 3, pp. 209-212, March 2013.

Blind/Referenceless Image Spatial QUality Evaluator (BRISQUE)

Blind/Referenceless Image Spatial QUality Evaluator (BRISQUE) is a natural scene statistic (NSS)-based distortion-generic blind/no-reference (NR) image quality assessment (IQA) model which operates in the spatial domain. It does not compute distortion specific features such as ringing, blur or blocking, but instead uses scene statistics of locally normalized luminance coefficients to quantify possible losses of ‘naturalness’ in the image due to the presence of distortions, thereby leading to a holistic measure of quality.

The underlying features derive from the empirical distribution of locally normalized luminances, and of products of locally normalized luminances, under a spatial natural scene statistic model. No transformation to another coordinate frame (DCT, wavelet, etc.) is required, distinguishing it from prior no-reference IQA approaches. Despite its simplicity, we are able to show that BRISQUE is statistically better than the full-reference peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM), and highly competitive with all present-day distortion-generic NR IQA algorithms. BRISQUE has very low computational complexity, making it well suited for real-time applications. BRISQUE features may be used for distortion identification as well.
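The locally normalized luminances referred to above are often called MSCN (mean-subtracted contrast-normalized) coefficients. A minimal sketch, assuming a Gaussian weighting window (the window width and stabilizing constant below are illustrative choices, not necessarily those of the published model):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(image, sigma=7/6, c=1.0):
    """Mean-subtracted contrast-normalized coefficients.
    Local mean and standard deviation are estimated with a Gaussian
    window; c stabilizes the division in flat regions."""
    img = image.astype(np.float64)
    mu = gaussian_filter(img, sigma)
    var = gaussian_filter(img * img, sigma) - mu * mu
    sd = np.sqrt(np.abs(var))  # abs() guards tiny negative round-off
    return (img - mu) / (sd + c)
```

For natural images the MSCN histogram is close to a unit Gaussian; distortions alter its shape (e.g., heavier or lighter tails), and those shape changes are what the BRISQUE features capture.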

To illustrate a new practical application of BRISQUE, we describe how a non-blind image denoising algorithm can be augmented with BRISQUE in order to perform blind image denoising. Results show that BRISQUE augmentation leads to performance improvements over the state-of-the-art.

Relevant Publications:

1. A. Mittal, A. K. Moorthy and A. C. Bovik, "No-Reference Image Quality Assessment in the Spatial Domain," IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695-4708, December 2012.

2. A. Mittal, A. K. Moorthy and A. C. Bovik, "Referenceless Image Spatial Quality Evaluation Engine," 45th Asilomar Conference on Signals, Systems and Computers, November 2011.

Distortion Identification-based Image Verity and INtegrity Evaluation (DIIVINE)

DIIVINE is a distortion-agnostic approach to blind IQA that utilizes concepts from natural scene statistics (NSS) not only to quantify the distortion, and hence the quality, of an image, but also to qualify the distortion type afflicting it. The Distortion Identification-based Image Verity and INtegrity Evaluation (DIIVINE) index utilizes a two-stage framework for blind IQA that first identifies the distortion afflicting the image and then performs distortion-specific quality assessment.
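The two stages combine probabilistically: stage 1 yields a probability for each candidate distortion class, stage 2 yields a quality estimate from each distortion-specific assessor, and the final score is the probability-weighted combination. A minimal sketch of that combination step (the classifiers and regressors themselves are omitted):

```python
import numpy as np

def two_stage_quality(distortion_probs, distortion_scores):
    """Probability-weighted combination of distortion-specific quality
    estimates: score = sum_d p(d) * q_d / sum_d p(d)."""
    p = np.asarray(distortion_probs, dtype=np.float64)
    q = np.asarray(distortion_scores, dtype=np.float64)
    return float(p @ q / p.sum())
```

This soft weighting means a misclassified distortion degrades the score gracefully rather than catastrophically, since every distortion-specific assessor contributes in proportion to its posterior probability.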

Our computational theory for distortion-agnostic blind IQA is based on the regularity of natural scene statistics (NSS); for example, it is known that the power spectrum of natural scenes falls off approximately as 1/f^b, where f is spatial frequency. NSS models for natural images seek to capture and describe the statistical relationships that are common across natural (undistorted) images. Our hypothesis is that the presence of distortion in natural images alters these statistical properties, thereby rendering the image 'un-natural'. NR IQA can then be accomplished by quantifying this 'un-naturalness' and relating it to perceived quality.
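The 1/f^b regularity mentioned above can be checked directly: compute a radially averaged power spectrum and fit a line in log-log coordinates. The sketch below is illustrative (binning scheme and fit are simple choices, not part of DIIVINE itself):

```python
import numpy as np

def radial_power_spectrum(image, nbins=30):
    """Radially averaged power spectrum of a grayscale image."""
    f = np.fft.fftshift(np.fft.fft2(image - image.mean()))
    power = np.abs(f) ** 2
    h, w = image.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2)      # radial frequency index
    bins = np.linspace(1, r.max(), nbins + 1)
    idx = np.digitize(r, bins)
    freqs, spectrum = [], []
    for i in range(1, nbins + 1):
        mask = idx == i
        if mask.any():
            freqs.append(bins[i - 1])
            spectrum.append(power[mask].mean())
    return np.array(freqs), np.array(spectrum)

def spectral_slope(image):
    """Least-squares slope of log power vs. log frequency.
    Natural images give a markedly negative slope (power ~ 1/f^b);
    white noise gives a slope near zero."""
    f, p = radial_power_spectrum(image)
    keep = p > 0
    return float(np.polyfit(np.log(f[keep]), np.log(p[keep]), 1)[0])
```

A distorted image whose fitted b deviates from the natural range is, in this sense, measurably 'un-natural'.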

DIIVINE divines the quality of an image without any need for a reference or the benefit of distortion models, with such precision that its performance is statistically indistinguishable from popular FR algorithms such as the structural similarity (SSIM) index. The approach is distortion-agnostic, since it does not compute distortion-specific indicators of quality, but utilizes an NSS-based approach to qualify as well as quantify the distortion afflicting the image. The approach is also modular, in that it can easily be extended beyond the pool of distortions considered here.

Relevant Publications:

1. A. K. Moorthy and A. C. Bovik, "Blind Image Quality Assessment: From Natural Scene Statistics to Perceptual Quality," IEEE Transactions on Image Processing, vol. 20, no. 12, pp. 3350-3364, December 2011.

2. A. K. Moorthy and A. C. Bovik, "A Two-step Framework for Constructing Blind Image Quality Indices," IEEE Signal Processing Letters, vol. 17, no. 5, pp. 513-516, May 2010.

3. A. K. Moorthy and A. C. Bovik, "A Two-stage Framework for Blind Image Quality Assessment," IEEE International Conference on Image Processing (ICIP), September 2010.

4. A. K. Moorthy and A. C. Bovik, "Statistics of Natural Image Distortions," IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), March 2010.

BLind Image Integrity Notator using DCT-Statistics (BLIINDS)

BLIINDS is an efficient, general-purpose, non-distortion-specific, blind/no-reference image quality assessment (NR-IQA) algorithm that uses natural scene statistics models of discrete cosine transform (DCT) coefficients to perform distortion-agnostic NR IQA.

We derive a generalized NSS-based model of local DCT coefficients, and transform the model parameters into features suitable for perceptual image quality score prediction. The statistics of the DCT features vary in a natural and predictable manner as the image quality changes. A generalized probabilistic model is applied to these features, and used to make probabilistic predictions of visual quality. We show that the method correlates highly with human subjective judgments of quality.
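A key primitive in DCT-domain NSS modeling is fitting a generalized Gaussian density (GGD) to coefficient histograms; the fitted shape parameter is the kind of model parameter that becomes a quality feature. A common moment-matching estimator is sketched below (the grid search is an illustrative choice, not necessarily the published fitting procedure):

```python
import numpy as np
from scipy.special import gamma

def ggd_shape(x):
    """Estimate the generalized Gaussian shape parameter by matching
    the moment ratio r = E[x^2] / (E|x|)^2, which for shape s equals
    Gamma(1/s) * Gamma(3/s) / Gamma(2/s)^2.
    s = 2 recovers a Gaussian, s = 1 a Laplacian; distortion typically
    shifts the shape fitted to DCT coefficients."""
    x = np.asarray(x, dtype=np.float64)
    r = np.mean(x ** 2) / np.mean(np.abs(x)) ** 2
    shapes = np.arange(0.2, 10.0, 0.001)
    ratios = gamma(1 / shapes) * gamma(3 / shapes) / gamma(2 / shapes) ** 2
    return float(shapes[np.argmin(np.abs(ratios - r))])
```

Applied to AC coefficients of local DCT blocks, such fitted parameters vary predictably with distortion severity, which is what makes them usable as quality features.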

The contributions of our approach are as follows:

1) The proposed method inherits the advantages of the NSS approach to IQA. While the goal of IQA research is to produce algorithms that accord with human visual perception of quality, one can to some degree avoid modeling poorly understood functions of the human visual system (HVS) and instead derive models of the natural environment.
2) BLIINDS is non-distortion-specific: while most NR-IQA algorithms quantify a specific type of distortion, the features used in our algorithm are derived independently of the type of distortion and are effective across multiple distortion types. Consequently, it can be deployed in a wide range of applications.
3) We propose a novel model for the statistics of DCT coefficients.
4) Since the framework operates entirely in the DCT domain, one can exploit the availability of platforms devised for fast computation of DCT transforms.
5) The method requires minimal training and relies on a simple probabilistic model for quality score prediction, leading to further computational gains.
6) Finally, the method correlates highly with human visual perception of quality and yields highly competitive performance, even with respect to state-of-the-art FR-IQA algorithms.

Relevant Publications:

1. M. A. Saad, A. C. Bovik and C. Charrier, "Model-Based Blind Image Quality Assessment: A Natural Scene Statistics Approach in the DCT Domain," IEEE Transactions on Image Processing, vol. 21, no. 8, pp. 3339-3352, August 2012.

2. M. A. Saad, A. C. Bovik and C. Charrier, "DCT Statistics Model-based Blind Image Quality Assessment," IEEE International Conference on Image Processing (ICIP), September 2011.

3. M. A. Saad, A. C. Bovik and C. Charrier, "A DCT Statistics-Based Blind Image Quality Index," IEEE Signal Processing Letters, vol. 17, no. 6, pp. 583-586, June 2010.

4. M. A. Saad, A. C. Bovik and C. Charrier, "Natural DCT statistics approach to no-reference image quality assessment," IEEE International Conference on Image Processing (ICIP), September 2010.

No-Reference Quality Assessment Algorithm for Block-Based Compression Artifacts

Perhaps the most common distortion type encountered in real-world applications is that introduced by lossy compression algorithms, such as JPEG (for images) or MPEG/H.263 (for videos). These compression algorithms reduce spatial redundancy using the block-based Discrete Cosine Transform (DCT). When these algorithms are pushed to higher levels of compression, a visible 'blocking' artifact appears.

Blocking resulting from DCT-based compression algorithms running at low bit rates has a very regular profile. It manifests as an edge every 8 pixels (for the typical block size of 8 x 8 pixels), oriented in the horizontal and vertical directions. The strength of the blocking artifact can be measured by estimating the strength of these block edges. At LIVE, we have developed frequency-domain algorithms for measuring blocking artifacts in JPEG-compressed images, with the algorithm having no information about the reference image.
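The regularity of the artifact means even a simple spatial-domain measure can expose it; the sketch below (an illustrative toy, not LIVE's frequency-domain algorithm) compares the average luminance jump across 8-pixel column boundaries against the average jump elsewhere:

```python
import numpy as np

def blockiness(image, block=8):
    """Ratio of mean absolute horizontal luminance jump at block
    boundaries to the mean jump at non-boundary columns.
    Close to 1 for undistorted content; grows when 8x8 blocking
    artifacts are present."""
    img = image.astype(np.float64)
    dh = np.abs(np.diff(img, axis=1))           # horizontal gradients
    cols = np.arange(dh.shape[1])
    at_boundary = (cols % block) == block - 1   # columns 7|8, 15|16, ...
    boundary = dh[:, at_boundary].mean()
    elsewhere = dh[:, ~at_boundary].mean()
    return boundary / (elsewhere + 1e-12)
```

A production measure (and the LIVE algorithms) must also account for content masking and should examine vertical boundaries symmetrically; this one-axis ratio only illustrates the periodic-edge signature.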

Relevant Publications

  1. Z. Wang, H. R. Sheikh and A. C. Bovik, "No-reference perceptual quality assessment of JPEG compressed images," Proc. IEEE International Conference on Image Processing, September 2002.
  2. L. Lu, Z. Wang, A. C. Bovik and J. Kouloheris, "Full-Reference Video Quality Assessment Considering Structural Distortion and No-Reference Quality Evaluation of MPEG Video," Proc. IEEE International Conference on Multimedia and Expo, August 2002.
  3. S. Liu and A. C. Bovik, "DCT domain blind measurement of blocking artifacts in DCT-coded images," Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing, May 2001.
  4. Z. Wang, A. C. Bovik, and B. L. Evans, "Blind measurement of blocking artifacts in images," Proc. IEEE International Conference on Image Processing, September 2000.

No-Reference Quality Assessment for JPEG2000 Compressed Images using Natural Scene Statistics

Not all compression algorithms are block-based. Research in image and video coding has shown that greater compression can be achieved at the same visual quality if the block-based DCT is replaced by a Discrete Wavelet Transform (DWT). JPEG2000 is a recent image compression standard that uses the DWT. However, DWT-based algorithms also suffer from artifacts at low bit rates, specifically blurring and ringing. Unlike the blocking artifact, whose spatial location is predictable, blurring and ringing artifacts are image dependent. This makes distortion resulting from DWT-based compression algorithms (such as JPEG2000) much harder to quantify. At LIVE we have proposed a unique and innovative solution to this problem: we use Natural Scene Statistics models to quantify the departure of a distorted image from "expected" natural behavior.
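To make the idea of "expected natural behavior" in the wavelet domain concrete, here is an illustrative toy (not the published method, which models joint statistics of JPEG2000 coefficients): a one-level Haar decomposition, followed by the kurtosis of a detail subband. Natural-image subbands are strongly heavy-tailed (kurtosis well above the Gaussian value of 3), and departures from that heavy-tailed signature can flag un-naturalness:

```python
import numpy as np

def haar_subbands(image):
    """One level of a 2-D Haar wavelet transform (a simple stand-in
    for the JPEG2000 wavelet). Returns (LL, HL, LH, HH) subbands."""
    img = image.astype(np.float64)
    a = (img[0::2, :] + img[1::2, :]) / 2   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2   # vertical detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    hl = (a[:, 0::2] - a[:, 1::2]) / 2
    lh = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, hl, lh, hh

def subband_kurtosis(band):
    """Sample kurtosis of a subband; ~3 for Gaussian data, much larger
    for the heavy-tailed subbands of natural images."""
    x = band.ravel() - band.mean()
    return float(np.mean(x ** 4) / (np.mean(x ** 2) ** 2 + 1e-12))
```

The actual LIVE approach models how JPEG2000 quantization alters wavelet-coefficient statistics in a far richer way; this sketch only shows the general mechanism of comparing observed subband statistics against a natural-image expectation.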

Relevant Publications

  1. H. R. Sheikh, A. C. Bovik, and L. K. Cormack, "No-Reference Quality Assessment Using Natural Scene Statistics: JPEG2000," IEEE Transactions on Image Processing, vol. 14, no. 12, December 2005.
  2. H. R. Sheikh, A. C. Bovik, and L. K. Cormack, "Blind Quality Assessment of JPEG2000 Compressed Images Using Natural Scene Statistics," Proc. IEEE Asilomar Conference on Signals, Systems, and Computers, November 2003.
  3. H. R. Sheikh, Z. Wang, L. K. Cormack and A. C. Bovik, "Blind quality assessment for JPEG2000 compressed images," Proc. Thirty-Sixth Annual Asilomar Conference on Signals, Systems, and Computers, November 2002.

Back to Quality Assessment Research page