Towards End-to-end Video-based Eye Tracking

Overview

The code accompanying our ECCV 2020 publication and dataset, EVE.

Setup

Preferably, set up a Docker image or a virtual environment (virtualenvwrapper is recommended) for this repository. Please note that we have tested this codebase in the following environments:

  • Ubuntu 18.04 / A Linux-based cluster system (CentOS 7.8)
  • Python 3.6 / Python 3.7
  • PyTorch 1.5.1

Clone this repository somewhere with:

git clone git@github.com:swook/EVE
cd EVE/

Then from the base directory of this repository, install all dependencies with:

pip install -r requirements.txt

Please refer to the official PyTorch installation guide for setting up the torch and torchvision packages on your specific system.

You will also need to set up ffmpeg for video decoding. On Linux, we recommend installing distribution-specific packages (usually named ffmpeg). If necessary, check out the official download page or compilation instructions.
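
For example, on Ubuntu 18.04 (one of the environments listed above), the distribution package can typically be installed via the package manager; the exact command may differ on other distributions:

sudo apt update
sudo apt install ffmpeg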

Usage

Information on the code framework

Configuration file system

All available configuration parameters are defined in src/core/config_default.py.

To override the default values, you can either:

  1. Pass the parameter as a command-line argument to train.py or inference.py. In this case, replace all _ characters with -; e.g. the configuration parameter refine_net_enabled becomes --refine-net-enabled 1. Note that boolean parameters can be passed via either 0/no/false or 1/yes/true.
  2. Create a JSON file such as src/configs/eye_net.json or src/configs/refine_net.json and pass its path to train.py.

The order of application is as follows (see the combined example after this list):

  1. Default parameters
  2. JSON-provided parameters, in order of JSON file declaration. For instance, in the command python train.py config1.json config2.json, config2.json overrides config1.json entries should there be any overlap.
  3. CLI-provided parameters.
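
As a purely illustrative sketch of this precedence (assuming train.py is launched from src/ as described below, and using the config files shipped in src/configs/), the following run applies the defaults first, then eye_net.json, then refine_net.json for any overlapping entries, and finally forces refine_net_enabled on via the CLI flag:

python train.py configs/eye_net.json configs/refine_net.json --refine-net-enabled 1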

Automatic logging to Google Sheets

This framework implements automatic logging of all parameters, loss terms, and metrics to a Google Sheets document, using the gspread library. To enable this feature, follow these steps:

  1. Follow the instructions at https://gspread.readthedocs.io/en/latest/oauth2.html#for-end-users-using-oauth-client-id
  2. Set --gsheet-secrets-json-file to a path to the credentials JSON file, and set --gsheet-workbook-key to the document key. This key is the part after https://docs.google.com/spreadsheets/d/ and before any query or hash parameters.

An example config JSON file can be found at src/configs/sample_gsheet.json.
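
As a minimal sketch, the corresponding flags can also be passed on the command line alongside any other configuration; the credentials path and workbook key below are placeholders, not real values:

python train.py configs/eye_net.json \
    --gsheet-secrets-json-file /path/to/credentials.json \
    --gsheet-workbook-key YOUR_WORKBOOK_KEY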

Training a model

To train a model, run python train.py from src/ with the desired configuration changes (see "Configuration file system" above).
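
For example, a basic run using one of the provided configuration files might look as follows (paths assume the repository layout described above; adjust them to your setup):

cd src/
python train.py configs/eye_net.json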

Note that in order to resume the training of an existing model, you must provide the path to its output folder via the --resume-from argument.

Also, every fresh run of train.py generates a unique identifier, which is used to create a unique output folder in outputs/EVE/. Hence, we recommend using the Google Sheets logging feature (see "Automatic logging to Google Sheets") to keep track of your models.
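
A hedged sketch of resuming such a run (the folder name below is a placeholder for whichever unique identifier train.py generated, and the relative path may differ depending on your working directory):

python train.py --resume-from ../outputs/EVE/UNIQUE_RUN_ID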

Running inference

The single-sample inference script at src/inference.py accepts the same arguments as train.py, but expects two in particular:

  • --input-path is the path to a basler.mp4, webcam_l.mp4, webcam_c.mp4, or webcam_r.mp4 file from the EVE dataset.
  • --output-path is a path to a desired output location (ending in .mp4).

This script works for training, validation, and test samples, and shows the reference point-of-gaze ground truth when available.
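
A hypothetical invocation might look as follows; the input path is a placeholder for wherever your copy of the EVE dataset stores the chosen video, and additional configuration arguments (e.g. a config JSON) may be required depending on your setup:

cd src/
python inference.py \
    --input-path /path/to/eve_dataset/RECORDING_DIR/webcam_c.mp4 \
    --output-path /path/to/output.mp4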

Citation

If using this code-base and/or the EVE dataset in your research, please cite the following publication:

@inproceedings{Park2020ECCV,
  author    = {Seonwook Park and Emre Aksan and Xucong Zhang and Otmar Hilliges},
  title     = {Towards End-to-end Video-based Eye-Tracking},
  year      = {2020},
  booktitle = {European Conference on Computer Vision (ECCV)}
}

Q&A

Q: How do I use this code for screen-based eye tracking?

A: This code does not offer actual eye tracking. Rather, it concerns the benchmarking of the video-based gaze estimation methods outlined in the original paper. Extending this code into easy-to-use software for screen-based eye tracking is non-trivial, due to requirements on camera calibration (intrinsics and extrinsics) and an efficient pipeline for accurate and stable real-time eye or face patch extraction. We therefore consider this to be beyond the scope of this code repository.

Q: Where are the test set labels?

A: Our public evaluation server and leaderboard are hosted by Codalab at https://competitions.codalab.org/competitions/28954. This keeps evaluations on our test set consistent and reliable, and encourages competition in the field of video-based gaze estimation. Please note that the performance reported by Codalab is not strictly comparable to the original paper's results, as we only evaluate on a large subset of the full test set. We recommend acquiring updated performance figures from the leaderboard.

Comments
  • use against new dataset

    Hi,

    Can this code be used at inference time against in-the-wild mp4 files that do not necessarily provide an accompanying H5? The more I work with this codebase, the more it looks obvious that without the mp4 being Tobii-generated, this will not work. Is this true?

    thank you

    opened by inisar 0
  • File name parser

    File name parser can be made more robust to your own dataset files.
    Currently it doesn't work for both webcam_l.mp4 and webcam_l_eyes.mp4. Please see below for the filename handling and the correction I made to make it work, in src/core/inference.py:

    try:
        camera_type = components[-1][:-4]
    except AssertionError:
        camera_type = camera_type[:-5]

    opened by inisar 0
  • How to synchronize the data from camera and eye tracker?

    Hi, @swook. I use OpenCV to capture the frames; what bothers me is that I don't know how to attach a timestamp to each frame and ensure that the interval between timestamps is nearly the same. By using datetime.time(), I can get the current time and regard it as the timestamp, but the intervals between timestamps seem to differ and have big gaps. Could you share some details about the method you used to synchronize the data? It would be very nice if you could share the source code or your method with me. Thanks.

    opened by Kihensarn 0
  • How to get the 3D gaze origin

    Hi, @swook. Thanks for your great work, but I have a question about how to get the 3D gaze origin (determined during data pre-processing). The paper says "In pre-processing the EVE dataset, we apply a 3DMM fitting approach with interocular-distance-based scale-normalization to alleviate these issues". However, I'm not sure about the specific process of this step. What should I do if I want to convert from landmarks to the 3D gaze origin? Also, would it be possible to open-source some of the code for this part? Thanks a lot!

    opened by TeresaKumo 0
  • About the result

    I trained the EVE model with the EVE data, ran eval_codalab.py and got a pkl file as a result. I also ran eval_codalab.py with the pretrained model weights (from https://github.com/swook/EVE/releases/tag/v0.0 - eve_refinenet_CGRU_oa_skip.pt) and got a pkl file. Then I compared these two results and the numbers seem to match. For example, from the pretrained model I got [960. 540.] for PoG_px_final, and from my model I got [963.0835 650.5635].

    However, in the EVE paper, Table 3 shows that the PoG_px of the GRU model with oa+skip is 95.59. The numbers in the paper are about 1/10 of the numbers I got from eval_codalab.py, and I'm not sure what went wrong. Are they supposed to match? If not, how do you calculate the numbers?

    Also, on the Codalab results page the gaze direction (angular error) is shown, but eval_codalab.py doesn't store gaze direction (Keys_to_store=['left pupil size', 'right pupil', 'pog__px_initial', 'pog_px_final', 'timestamp']). How should I get the gaze direction error in degrees?

    opened by chaeyoun 1
Owner
Seonwook Park