Towards End-to-end Video-based Eye Tracking

Related tags

Deep Learning, EVE
Overview

Towards End-to-end Video-based Eye Tracking

The code accompanying our ECCV 2020 publication and dataset, EVE.

Setup

Preferably, set up a Docker image or a virtual environment (virtualenvwrapper is recommended) for this repository. Please note that we have tested this code-base in the following environments:

  • Ubuntu 18.04 / A Linux-based cluster system (CentOS 7.8)
  • Python 3.6 / Python 3.7
  • PyTorch 1.5.1

Clone this repository somewhere with:

git clone git@github.com:swook/EVE
cd EVE/

Then from the base directory of this repository, install all dependencies with:

pip install -r requirements.txt

Please refer to the official PyTorch installation guide for instructions on setting up the torch and torchvision packages on your specific system.
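
For reference, a minimal installation of the tested versions might look like the following (this assumes the default PyPI build suits your system; use the official guide to get the correct command for your CUDA setup):

pip install torch==1.5.1 torchvision==0.6.1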

You will also need to set up ffmpeg for video decoding. On Linux, we recommend installing distribution-specific packages (usually named ffmpeg). If necessary, check out the official download page or compilation instructions.
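
For example, on Ubuntu the distribution package can typically be installed with:

sudo apt install ffmpeg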

Usage

Information on the code framework

Configuration file system

All available configuration parameters are defined in src/core/config_default.py.

In order to override the default values, one can do:

  1. Pass the parameter as a command-line argument to train.py or inference.py. In this case, replace all _ characters with -, e.g. the configuration parameter refine_net_enabled becomes --refine-net-enabled 1. Note that boolean parameters can be passed in as either 0/no/false or 1/yes/true.
  2. Create a JSON file such as src/configs/eye_net.json or src/configs/refine_net.json.

The order of application is:

  1. Default parameters
  2. JSON-provided parameters, in order of JSON file declaration. For instance, in the command python train.py config1.json config2.json, config2.json overrides config1.json entries should there be any overlap.
  3. CLI-provided parameters, which take the highest precedence (see the example below).
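
For example, the following hypothetical invocation (run from src/, using the shipped config files) applies the defaults first, then the two JSON files in the order listed, and finally the CLI flag, which overrides both:

python train.py configs/eye_net.json configs/refine_net.json --refine-net-enabled 1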

Automatic logging to Google Sheets

This framework can automatically log all parameters, loss terms, and metrics to a Google Sheets document, using the gspread library. To enable this feature, follow these instructions:

  1. Follow the instructions at https://gspread.readthedocs.io/en/latest/oauth2.html#for-end-users-using-oauth-client-id
  2. Set --gsheet-secrets-json-file to a path to the credentials JSON file, and set --gsheet-workbook-key to the document key. This key is the part after https://docs.google.com/spreadsheets/d/ and before any query or hash parameters.

An example config JSON file can be found at src/configs/sample_gsheet.json.
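
As a sketch, a training run with Google Sheets logging enabled might then be launched as follows (the credentials path and workbook key are placeholders):

python train.py configs/eye_net.json --gsheet-secrets-json-file ~/path/to/credentials.json --gsheet-workbook-key YOUR_WORKBOOK_KEY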

Training a model

To train a model, simply run python train.py from src/ with the desired configuration changes (see "Configuration file system" above).

Note that in order to resume the training of an existing model, you must provide the path to its output folder via the --resume-from argument.

Also, every fresh run of train.py generates a unique identifier and a corresponding output folder in outputs/EVE/. Hence, it is recommended to use the Google Sheets logging feature (see "Automatic logging to Google Sheets") to keep track of your models.
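
For instance, a fresh run and a later resumed run might look like the following (the output folder name is a placeholder; adjust the path to wherever your outputs/EVE/ folder is created):

cd src/
python train.py configs/eye_net.json
python train.py configs/eye_net.json --resume-from ../outputs/EVE/<unique-identifier>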

Running inference

The single-sample inference script at src/inference.py accepts the same arguments as train.py but expects two arguments in particular:

  • --input-path is the path to a basler.mp4, webcam_l.mp4, webcam_c.mp4, or webcam_r.mp4 file that exists in the EVE dataset.
  • --output-path is a path to a desired output location (ending in .mp4).

This script works for training, validation, and test samples, and shows the reference point-of-gaze ground truth when available.
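
For example (the dataset and output paths below are placeholders):

python inference.py --input-path /path/to/eve_dataset/<some_sequence>/webcam_c.mp4 --output-path /tmp/webcam_c_inference.mp4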

Citation

If using this code-base and/or the EVE dataset in your research, please cite the following publication:

@inproceedings{Park2020ECCV,
  author    = {Seonwook Park and Emre Aksan and Xucong Zhang and Otmar Hilliges},
  title     = {Towards End-to-end Video-based Eye-Tracking},
  year      = {2020},
  booktitle = {European Conference on Computer Vision (ECCV)}
}

Q&A

Q: How do I use this code for screen-based eye tracking?

A: This code does not offer actual eye tracking. Rather, it concerns the benchmarking of the video-based gaze estimation methods outlined in the original paper. Extending this code into easy-to-use software for screen-based eye tracking is non-trivial, due to requirements on camera calibration (intrinsics, extrinsics) and the need for an efficient pipeline for accurate and stable real-time eye or face patch extraction. Thus, we consider this to be beyond the scope of this code repository.

Q: Where are the test set labels?

A: Our public evaluation server and leaderboard are hosted by Codalab at https://competitions.codalab.org/competitions/28954. This allows evaluations on our test set to be consistent and reliable, and encourages competition in the field of video-based gaze estimation. Please note that the performance reported by Codalab is not, strictly speaking, comparable to the original paper's results, as we only perform evaluation on a large subset of the full test set. We recommend acquiring updated performance figures from the leaderboard.

Comments
  • use against new dataset

    Hi,

    Can this code be used at inference time on in-the-wild mp4 files that do not come with an accompanying H5 file? The more I work with this codebase, the more it seems that unless the mp4 is Tobii-generated, this will not work. Is this true?

    thank you

    opened by inisar 0
  • File name parser

    The file name parser could be made more robust to your own dataset files.
    Currently it does not work for both webcam_l.mp4 and webcam_l_eyes.mp4. Please see below for the correction I made in src/core/inference.py to make it work:

    try:
        camera_type = components[-1][:-4]
    except AssertionError:
        camera_type = camera_type[:-5]

    opened by inisar 0
  • How to synchronize the data from camera and eye tracker?

    Hi, @swook. I use OpenCV to capture the frames, but what bothers me is that I don't know how to attach a timestamp to each frame and ensure that the interval between timestamps is nearly the same. By using datetime.time(), I can get the current time and regard it as the timestamp, but the intervals between timestamps seem to differ and can have big gaps. So could you share some details about the method you used to synchronize the data? It would be very nice if you could share the source code or your approach with me. Thanks.

    opened by Kihensarn 0
  • How to get the 3D gaze origin

    Hi, @swook. Thanks for your great work, but I have a question about how to get the 3D gaze origin (determined during data pre-processing). The paper says "In pre-processing the EVE dataset, we apply a 3DMM fitting approach with interocular-distance-based scale-normalization to alleviate these issues". However, I'm not sure about the specific process of this step. What should I do if I want to convert from landmarks to the 3D gaze origin? Also, would it be possible to open-source the code for this part? Thanks a lot!

    opened by TeresaKumo 0
  • About the result

    I trained the EVE model with EVE data, ran eval_codalab.py, and got a pkl file as a result. I also ran eval_codalab.py with the pretrained model weights (eve_refinenet_CGRU_oa_skip.pt from https://github.com/swook/EVE/releases/tag/v0.0) and got a pkl file. Then I compared the two results and the numbers seem to match. For example, from the pretrained model I got [960. 540.] for PoG_px_final, and [963.0835 650.5635] for my model.

    However, in the EVE paper, Table 3 shows that the PoG_px for the GRU model with oa+skip is 95.59. The numbers in the paper are about 1/10 of the numbers I got from eval_codalab.py, and I'm not sure what went wrong. Are they supposed to match? If not, how do you calculate the numbers?

    Also, on the Codalab results page the gaze direction (angular error) is shown, but eval_codalab.py doesn't store gaze direction (keys_to_store = ['left pupil size', 'right pupil', 'pog__px_initial', 'pog_px_final', 'timestamp']). How should I get the gaze direction error in degrees?

    opened by chaeyoun 1
Owner
Seonwook Park