
Laboratory for Image & Video Engineering

 

GAFFE: A Gaze-Attentive Fixation Finding Engine

Umesh Rajashekar, Ian van der Linde, Alan C. Bovik, and Lawrence K. Cormack
Center for Perceptual Systems and the Laboratory for Image and Video Engineering, The University of Texas at Austin


Introduction

GAFFE is a gaze-attentive fixation finding engine that uses a bottom-up modality to select fixations in natural scenes. GAFFE is built on a data-driven framework in which eye tracking was first used to evaluate the contributions of four foveated low-level image features in drawing observers' fixations. In particular, using the DOVES database, we recorded the eye movements of 29 observers as they viewed 101 calibrated natural images and studied the statistics of four low-level local image features: luminance, contrast, and the bandpass outputs of both luminance and contrast. We discovered that image patches around human fixations had, on average, higher values of each of these features than image patches selected at random. Using these measurements, we developed a new algorithm that selects image regions as likely candidates for fixation.
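To give a feel for these four features, the following Python sketch (not the MATLAB code distributed here) computes simple versions of each for a grayscale patch. The filter widths and the difference-of-Gaussians bandpass filter are illustrative choices, not the exact parameters used in the paper.

```python
import numpy as np

def _gauss_kernel(sigma):
    # 1-D Gaussian kernel truncated at 3 sigma
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def _blur(img, sigma):
    # Separable Gaussian blur: 1-D convolution along each axis
    k = _gauss_kernel(sigma)
    out = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, out, k, mode="same")

def patch_features(patch):
    """Simple versions of the four low-level features for a grayscale
    patch (2-D float array in [0, 1]). Illustrative only; the sigmas
    and filters are assumptions, not the paper's parameters."""
    luminance = patch.mean()                         # mean luminance
    rms_contrast = patch.std() / (luminance + 1e-8)  # RMS contrast
    # Difference-of-Gaussians as a crude bandpass filter on luminance
    bandpass_lum = np.abs(_blur(patch, 1.0) - _blur(patch, 3.0)).mean()
    # Local contrast map, then the same bandpass filter applied to it
    local_contrast = np.abs(patch - _blur(patch, 2.0))
    bandpass_con = np.abs(_blur(local_contrast, 1.0)
                          - _blur(local_contrast, 3.0)).mean()
    return luminance, rms_contrast, bandpass_lum, bandpass_con
```

Comparing these feature values between patches centered at human fixations and patches drawn at random is the kind of measurement that motivated the algorithm.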

We have decided to make the MATLAB code for GAFFE available to the research community free of charge. If you use this code in your research, we kindly ask that you reference this website and the following publication:

  • U. Rajashekar, I. van der Linde, A. C. Bovik, and L. K. Cormack, "GAFFE: A Gaze-Attentive Fixation Finding Engine", to appear in IEEE Transactions on Image Processing, 2008. URL: http://live.ece.utexas.edu/research/gaffe.

Please email Umesh Rajashekar (umesh.rajashekar@gmail.com) with any comments or suggestions related to this code.



Using GAFFE

  1. Download the MATLAB code for GAFFE here. If you use the code provided here, it is assumed that you agree to this copyright agreement.
  2. In addition, you will need the Space Variant Imaging Toolbox, written by Jeff Perry, to implement the foveation process at each stage of the algorithm.
  3. Please note that although GAFFE can be used to predict fixations for any image, its parameters were fixed assuming a viewing condition of 1 pixel per arc minute.
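To illustrate what this viewing condition means, the following Python sketch converts a display geometry into pixels per arc minute. The numbers in the example are made up for illustration; they are not the actual DOVES recording setup.

```python
import math

def pixels_per_arcmin(viewing_distance_cm, screen_width_cm, screen_width_px):
    """Number of pixels subtended by one arc minute of visual angle
    at the center of the screen (small-angle geometry)."""
    # Physical width of 1/60 of a degree at the given viewing distance
    arcmin_rad = math.radians(1.0 / 60.0)
    arcmin_cm = 2.0 * viewing_distance_cm * math.tan(arcmin_rad / 2.0)
    return arcmin_cm * screen_width_px / screen_width_cm

# Hypothetical setup: 134 cm viewing distance, 40 cm wide screen, 1024 px
print(round(pixels_per_arcmin(134.0, 40.0, 1024), 2))  # prints 1.0
```

If your own viewing geometry yields a substantially different value, the fixed parameters in GAFFE may not be appropriate without adjustment.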

Folders in the download include:

  1. Code: This folder contains a few basic Matlab programs that will help you get started with GAFFE quickly.
    • gaffe_main.m: This program illustrates a simple example of how GAFFE predicts fixations in an image.
    • view_predicted_fixations.m: This program is useful for visually comparing the predictions of GAFFE with the recorded fixations. The program creates a pseudo-dense map of the true fixations for an image (by replacing each fixation with a 2D Gaussian window) and overlays the predictions from GAFFE on top of this map.
  2. PredictedFixations: This folder contains a set of 10 fixations as predicted by GAFFE for each of the 101 images in the DOVES database. See view_predicted_fixations.m for an example of how the data is stored. You might find this data useful to compare the performance of your algorithm with GAFFE.
  3. RecordedFixations: This folder contains an example of recorded fixations for one image in the DOVES database. The entire set of recorded fixations can be obtained by downloading the DOVES database.
  4. VanhatImages: This folder contains an example image from the DOVES database. The entire set of images used in the paper can be obtained by downloading the DOVES database.
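The pseudo-dense map that view_predicted_fixations.m builds (each recorded fixation replaced by a 2D Gaussian) can be sketched in a few lines of Python. The Gaussian width below is an arbitrary illustrative choice, not the value used by the MATLAB code.

```python
import numpy as np

def pseudo_dense_map(fixations, shape, sigma=25.0):
    """Pseudo-dense fixation map: place a 2-D Gaussian at each
    recorded fixation (row, col) and sum the results."""
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    density = np.zeros(shape)
    for r, c in fixations:
        density += np.exp(-((rows - r) ** 2 + (cols - c) ** 2)
                          / (2.0 * sigma ** 2))
    return density / density.max()  # normalize to [0, 1] for display
```

Overlaying GAFFE's predicted fixation points on this map gives a quick visual sense of how often the predictions land in densely fixated regions.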

Example Predictions From GAFFE

  1. You can use gaffe_main.m to load an image (from the DOVES database) and predict fixation points.

    A simple example of predicting 10 fixations using GAFFE on an image from the DOVES database.

  2. You can then use view_predicted_fixations.m to see how well the fixations predicted by GAFFE overlap with the recorded fixations for that image.

    An overlay of fixations predicted by GAFFE (red) on a pseudo-dense map of the true fixations. The yellow dots represent the recorded fixations from the DOVES database for this image; each recorded fixation was replaced by a 2D Gaussian to create the pseudo-dense map.
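If you want a numerical agreement score rather than a visual overlay, one simple illustrative measure (not necessarily the evaluation used in the paper) is the mean distance from each predicted fixation to its nearest recorded fixation:

```python
import numpy as np

def mean_nearest_distance(predicted, recorded):
    """Mean Euclidean distance (in pixels) from each predicted fixation
    to its nearest recorded fixation; lower means closer agreement."""
    pred = np.asarray(predicted, dtype=float)
    rec = np.asarray(recorded, dtype=float)
    # Pairwise distances (n_pred x n_rec), then min over recorded fixations
    d = np.linalg.norm(pred[:, None, :] - rec[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())
```

A sensible baseline is to compute the same score for randomly selected image locations; a fixation-prediction algorithm should do noticeably better than that baseline.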

Copyright

-----------COPYRIGHT NOTICE STARTS WITH THIS LINE------------
Copyright (c) 2007 The University of Texas at Austin
All rights reserved.

Permission is hereby granted, without written agreement and without license or royalty fees, to use, copy, modify, and distribute this code and its documentation for any purpose, provided that the copyright notice in its entirety appear in all copies of this database, and the original source of this database, Laboratory for Image and Video Engineering (LIVE, http://live.ece.utexas.edu) and Center for Perceptual Systems (CPS, http://www.cps.utexas.edu) at the University of Texas at Austin (UT Austin, http://www.utexas.edu), is acknowledged in any publication that reports research using this database. The use of this code is to be cited in the bibliography as:

  • U. Rajashekar, I. van der Linde, A. C. Bovik, and L. K. Cormack, "GAFFE: A Gaze-Attentive Fixation Finding Engine", to appear in IEEE Transactions on Image Processing, 2008. URL: http://live.ece.utexas.edu/research/gaffe.

IN NO EVENT SHALL THE UNIVERSITY OF TEXAS AT AUSTIN BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OF THIS DATABASE AND ITS DOCUMENTATION, EVEN IF THE UNIVERSITY OF TEXAS AT AUSTIN HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

THE UNIVERSITY OF TEXAS AT AUSTIN SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE DATABASE PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND THE UNIVERSITY OF TEXAS AT AUSTIN HAS NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.

-----------COPYRIGHT NOTICE ENDS WITH THIS LINE------------

 
