Towards End-to-end Video-based Eye Tracking

The code accompanying our ECCV 2020 publication and dataset, EVE.

Setup

Preferably, set up a Docker image or a virtual environment (virtualenvwrapper is recommended) for this repository. Please note that we have tested this codebase in the following environments:

  • Ubuntu 18.04 / A Linux-based cluster system (CentOS 7.8)
  • Python 3.6 / Python 3.7
  • PyTorch 1.5.1

Clone this repository somewhere with:

git clone git@github.com:swook/EVE
cd EVE/

Then from the base directory of this repository, install all dependencies with:

pip install -r requirements.txt

Please refer to the official PyTorch installation guide for setting up the torch and torchvision packages on your specific system.
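
For example, a CPU-only installation matching the tested PyTorch version might look as follows (the torchvision version pairing here is our assumption; adjust both packages for your CUDA setup):

pip install torch==1.5.1 torchvision==0.6.1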

You will also need to set up ffmpeg for video decoding. On Linux, we recommend installing distribution-specific packages (usually named ffmpeg). If necessary, check out the official download page or compilation instructions.
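
On Ubuntu 18.04, for instance, the distribution package can be installed with:

sudo apt install ffmpeg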

Usage

Information on the code framework

Configuration file system

All available configuration parameters are defined in src/core/config_default.py.

In order to override the default values, one can do:

  1. Pass the parameter as a command-line argument to train.py or inference.py. In this case, replace all _ characters with -; e.g. the configuration parameter refine_net_enabled becomes --refine-net-enabled 1. Boolean parameters can be passed in via either 0/no/false or 1/yes/true (see the example below).
  2. Create a JSON file such as src/configs/eye_net.json or src/configs/refine_net.json.

The order of application is:

  1. Default parameters
  2. JSON-provided parameters, in order of JSON file declaration. For instance, in the command python train.py config1.json config2.json, config2.json overrides config1.json entries should there be any overlap.
  3. CLI-provided parameters.
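
As a concrete illustration (the file name and value below are hypothetical), a JSON override such as:

{
    "refine_net_enabled": false
}

saved as my_config.json would be applied on top of the defaults, and a final CLI flag would override it in turn:

python train.py my_config.json --refine-net-enabled 1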

Automatic logging to Google Sheets

This framework can automatically log all parameters, loss terms, and metrics to a Google Sheets document, using the gspread library. To enable this feature, follow these steps:

  1. Follow the instructions at https://gspread.readthedocs.io/en/latest/oauth2.html#for-end-users-using-oauth-client-id
  2. Set --gsheet-secrets-json-file to a path to the credentials JSON file, and set --gsheet-workbook-key to the document key. This key is the part after https://docs.google.com/spreadsheets/d/ and before any query or hash parameters.

An example config JSON file can be found at src/configs/sample_gsheet.json.
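
Following the underscore naming convention described above, such a file would plausibly contain entries like the following (the exact key names are an assumption on our part; consult the sample file and src/core/config_default.py for the authoritative names):

{
    "gsheet_secrets_json_file": "/path/to/credentials.json",
    "gsheet_workbook_key": "your-document-key"
}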

Training a model

To train a model, run python train.py from src/ with the desired configuration changes (see "Configuration file system" above).

Note that in order to resume training an existing model, you must provide the path to its output folder via the --resume-from argument.

Also, every fresh run of train.py generates a unique identifier, which is used to produce a unique output folder in outputs/EVE/. Hence, we recommend using the Google Sheets logging feature (see "Automatic logging to Google Sheets" above) to keep track of your models.
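
A typical invocation might look like this:

cd src/
python train.py configs/eye_net.json

and, to resume a previous run (the identifier below is a placeholder for the generated folder name):

python train.py configs/eye_net.json --resume-from ../outputs/EVE/<run-identifier>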

Running inference

The single-sample inference script at src/inference.py accepts the same arguments as train.py, but expects two arguments in particular:

  • --input-path is the path to a basler.mp4, webcam_l.mp4, webcam_c.mp4, or webcam_r.mp4 file from the EVE dataset.
  • --output-path is a path to a desired output location (ending in .mp4).

This script works for training, validation, and test samples, and shows the reference point-of-gaze ground truth when available.
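
For example (the input path below is a placeholder; point it at an actual video file from the EVE dataset):

python inference.py --input-path /path/to/eve/<subject>/<stimulus>/webcam_c.mp4 --output-path /tmp/visualization.mp4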

Citation

If using this codebase and/or the EVE dataset in your research, please cite the following publication:

@inproceedings{Park2020ECCV,
  author    = {Seonwook Park and Emre Aksan and Xucong Zhang and Otmar Hilliges},
  title     = {Towards End-to-end Video-based Eye-Tracking},
  year      = {2020},
  booktitle = {European Conference on Computer Vision (ECCV)}
}

Q&A

Q: How do I use this code for screen-based eye tracking?

A: This code does not offer actual eye tracking. Rather, it concerns the benchmarking of the video-based gaze estimation methods outlined in the original paper. Extending this code into easy-to-use software for screen-based eye tracking is non-trivial, due to requirements on camera calibration (intrinsics and extrinsics) and an efficient pipeline for accurate and stable real-time eye or face patch extraction. Thus, we consider this to be beyond the scope of this repository.

Q: Where are the test set labels?

A: Our public evaluation server and leaderboard are hosted by Codalab at https://competitions.codalab.org/competitions/28954. This keeps evaluations on our test set consistent and reliable, and encourages competition in the field of video-based gaze estimation. Please note that the performance reported by Codalab is not, strictly speaking, comparable to the original paper's results, as we evaluate only on a large subset of the full test set. We recommend acquiring updated performance figures from the leaderboard.

Comments
  • use against new dataset

    Hi,

    Can this code be used at inference time on in-the-wild mp4 files that do not come with an accompanying H5 file? The more I work with this codebase, the more obvious it seems that without the mp4 being Tobii-generated, this will not work. Is this true?

    thank you

    opened by inisar 0
  • File name parser

    The file name parser could be made more robust to users' own dataset files. It currently doesn't work for both webcam_l.mp4 and webcam_l_eyes.mp4. Please see below for the correction I made in src/core/inference.py to make it work:

    try:
        camera_type = components[-1][:-4]
    except AssertionError:
        camera_type = camera_type[:-5]

    opened by inisar 0
  • How to synchronize the data from camera and eye tracker?

    Hi, @swook. I use OpenCV to capture the frames. What bothers me is that I don't know how to attach a timestamp to each frame and ensure that the interval between timestamps stays nearly the same. By using datetime.time(), I can get the current time and regard it as the timestamp, but the intervals between timestamps seem to vary, sometimes with a big gap. Could you share some details about the method you used to synchronize the data? It would be very nice if you could share the source code or your method with me. Thanks.

    opened by Kihensarn 0
  • How to get the 3D gaze origin

    Hi, @swook. Thanks for your great work, but I have a question about how to get the 3D gaze origin (determined during data pre-processing). The paper says: "In pre-processing the EVE dataset, we apply a 3DMM fitting approach with interocular-distance-based scale-normalization to alleviate these issues." However, I'm not sure about the specific process of this step. What should I do if I want to convert from landmarks to the 3D gaze origin? Also, would it be possible to open-source the code for this part? Thanks a lot!

    opened by TeresaKumo 0
  • About the result

    I trained the EVE model with EVE data, ran eval_codalab.py, and got a pkl file as a result. I also ran eval_codalab.py with the pretrained model weights (eve_refinenet_CGRU_oa_skip.pt from https://github.com/swook/EVE/releases/tag/v0.0) and got a pkl file. Then I compared these two results, and the numbers seem to match. For example, from the pretrained model I got [960. 540.] for PoG_px_final, and [963.0835 650.5635] for my model.

    However, in the EVE paper, Table 3 shows that the PoG_px for the GRU model with oa+skip is 95.59. The numbers in the paper are 1/10 of the numbers I got from eval_codalab.py, and I'm not sure what went wrong. Are they supposed to match? If not, how do you calculate the numbers?

    Also, the Codalab results page shows the gaze direction (angular error), but eval_codalab.py doesn't store gaze direction (keys_to_store = ['left pupil size', 'right pupil', 'pog_px_initial', 'pog_px_final', 'timestamp']). How should I get the gaze direction error in degrees?

    opened by chaeyoun 1
Owner

Seonwook Park